00:00:00.001 Started by upstream project "autotest-per-patch" build number 127169
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "jbp-per-patch" build number 24316
00:00:00.002 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.160 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.161 The recommended git tool is: git
00:00:00.161 using credential 00000000-0000-0000-0000-000000000002
00:00:00.162 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.213 Fetching changes from the remote Git repository
00:00:00.214 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.241 Using shallow fetch with depth 1
00:00:00.241 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.241 > git --version # timeout=10
00:00:00.294 > git --version # 'git version 2.39.2'
00:00:00.294 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.319 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.319 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/10/24310/6 # timeout=5
00:00:06.347 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.359 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.371 Checking out Revision 372f1a46acd6f697d572411a452deafc9650d88b (FETCH_HEAD)
00:00:06.371 > git config core.sparsecheckout # timeout=10
00:00:06.384 > git read-tree -mu HEAD # timeout=10
00:00:06.447 > git checkout -f 372f1a46acd6f697d572411a452deafc9650d88b # timeout=5
00:00:06.471 Commit message: "jenkins/autotest: remove redundant RAID flags"
00:00:06.471 > git rev-list --no-walk bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=10
00:00:06.568 [Pipeline] Start of Pipeline
00:00:06.583 [Pipeline] library
00:00:06.584 Loading library shm_lib@master
00:00:06.584 Library shm_lib@master is cached. Copying from home.
00:00:06.597 [Pipeline] node
00:00:06.611 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.612 [Pipeline] {
00:00:06.620 [Pipeline] catchError
00:00:06.621 [Pipeline] {
00:00:06.632 [Pipeline] wrap
00:00:06.641 [Pipeline] {
00:00:06.646 [Pipeline] stage
00:00:06.647 [Pipeline] { (Prologue)
00:00:06.805 [Pipeline] sh
00:00:07.087 + logger -p user.info -t JENKINS-CI
00:00:07.104 [Pipeline] echo
00:00:07.105 Node: GP11
00:00:07.113 [Pipeline] sh
00:00:07.411 [Pipeline] setCustomBuildProperty
00:00:07.424 [Pipeline] echo
00:00:07.425 Cleanup processes
00:00:07.430 [Pipeline] sh
00:00:07.716 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.716 354830 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.727 [Pipeline] sh
00:00:08.010 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.010 ++ grep -v 'sudo pgrep'
00:00:08.010 ++ awk '{print $1}'
00:00:08.010 + sudo kill -9
00:00:08.010 + true
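For reference, the process-cleanup step above amounts to the following stand-alone pattern; the workspace path is the one from this log, and the trailing '|| true' is what produces the '+ true' line when there are no stale PIDs to kill (a minimal sketch, not the pipeline's exact script):

    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    # List survivors of a previous run, drop the pgrep invocation itself, keep only the PIDs.
    PIDS=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # An empty PID list makes kill fail, so '|| true' keeps the cleanup from failing the stage.
    sudo kill -9 $PIDS || true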
00:00:08.023 [Pipeline] cleanWs
00:00:08.031 [WS-CLEANUP] Deleting project workspace...
00:00:08.031 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.038 [WS-CLEANUP] done
00:00:08.042 [Pipeline] setCustomBuildProperty
00:00:08.054 [Pipeline] sh
00:00:08.338 + sudo git config --global --replace-all safe.directory '*'
00:00:08.422 [Pipeline] httpRequest
00:00:08.444 [Pipeline] echo
00:00:08.445 Sorcerer 10.211.164.101 is alive
00:00:08.453 [Pipeline] httpRequest
00:00:08.457 HttpMethod: GET
00:00:08.458 URL: http://10.211.164.101/packages/jbp_372f1a46acd6f697d572411a452deafc9650d88b.tar.gz
00:00:08.459 Sending request to url: http://10.211.164.101/packages/jbp_372f1a46acd6f697d572411a452deafc9650d88b.tar.gz
00:00:08.472 Response Code: HTTP/1.1 200 OK
00:00:08.473 Success: Status code 200 is in the accepted range: 200,404
00:00:08.473 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_372f1a46acd6f697d572411a452deafc9650d88b.tar.gz
00:00:10.449 [Pipeline] sh
00:00:10.734 + tar --no-same-owner -xf jbp_372f1a46acd6f697d572411a452deafc9650d88b.tar.gz
00:00:10.750 [Pipeline] httpRequest
00:00:10.767 [Pipeline] echo
00:00:10.768 Sorcerer 10.211.164.101 is alive
00:00:10.776 [Pipeline] httpRequest
00:00:10.780 HttpMethod: GET
00:00:10.781 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:10.781 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:10.796 Response Code: HTTP/1.1 200 OK
00:00:10.797 Success: Status code 200 is in the accepted range: 200,404
00:00:10.797 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:31.912 [Pipeline] sh
00:00:32.206 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
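Both downloads above follow the same caching pattern: the tarball name is pinned to the exact revision under test (jbp_<sha>.tar.gz, spdk_<sha>.tar.gz), fetched from the Sorcerer package cache at 10.211.164.101, and unpacked in the workspace. A rough stand-alone equivalent (the pipeline itself uses the httpRequest step, so curl here is an assumption):

    SORCERER=http://10.211.164.101
    SHA=70425709083377aa0c23e3a0918902ddf3d34357          # SPDK revision from this run
    curl -fSs -o "spdk_${SHA}.tar.gz" "$SORCERER/packages/spdk_${SHA}.tar.gz"
    tar --no-same-owner -xf "spdk_${SHA}.tar.gz"          # extract as the CI user, not the archive's uid/gid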
00:00:35.512 [Pipeline] sh
00:00:35.797 + git -C spdk log --oneline -n5
00:00:35.797 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata.
00:00:35.797 fc2398dfa raid: clear base bdev configure_cb after executing
00:00:35.797 5558f3f50 raid: complete bdev_raid_create after sb is written
00:00:35.797 d005e023b raid: fix empty slot not updated in sb after resize
00:00:35.797 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set
00:00:35.809 [Pipeline] }
00:00:35.822 [Pipeline] // stage
00:00:35.830 [Pipeline] stage
00:00:35.831 [Pipeline] { (Prepare)
00:00:35.847 [Pipeline] writeFile
00:00:35.858 [Pipeline] sh
00:00:36.137 + logger -p user.info -t JENKINS-CI
00:00:36.149 [Pipeline] sh
00:00:36.431 + logger -p user.info -t JENKINS-CI
00:00:36.443 [Pipeline] sh
00:00:36.726 + cat autorun-spdk.conf
00:00:36.726 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:36.726 SPDK_TEST_NVMF=1
00:00:36.726 SPDK_TEST_NVME_CLI=1
00:00:36.726 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:36.726 SPDK_TEST_NVMF_NICS=e810
00:00:36.726 SPDK_TEST_VFIOUSER=1
00:00:36.726 SPDK_RUN_UBSAN=1
00:00:36.726 NET_TYPE=phy
00:00:36.733 RUN_NIGHTLY=0
00:00:36.736 [Pipeline] readFile
00:00:36.751 [Pipeline] withEnv
00:00:36.752 [Pipeline] {
00:00:36.761 [Pipeline] sh
00:00:37.042 + set -ex
00:00:37.042 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:37.042 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:37.042 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:37.042 ++ SPDK_TEST_NVMF=1
00:00:37.042 ++ SPDK_TEST_NVME_CLI=1
00:00:37.042 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:37.042 ++ SPDK_TEST_NVMF_NICS=e810
00:00:37.042 ++ SPDK_TEST_VFIOUSER=1
00:00:37.042 ++ SPDK_RUN_UBSAN=1
00:00:37.042 ++ NET_TYPE=phy
00:00:37.042 ++ RUN_NIGHTLY=0
00:00:37.042 + case $SPDK_TEST_NVMF_NICS in
00:00:37.042 + DRIVERS=ice
00:00:37.042 + [[ tcp == \r\d\m\a ]]
00:00:37.042 + [[ -n ice ]]
00:00:37.042 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:37.042 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:37.042 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:37.042 rmmod: ERROR: Module irdma is not currently loaded
00:00:37.042 rmmod: ERROR: Module i40iw is not currently loaded
00:00:37.042 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:37.042 + true
00:00:37.042 + for D in $DRIVERS
00:00:37.042 + sudo modprobe ice
00:00:37.042 + exit 0
00:00:37.051 [Pipeline] }
00:00:37.067 [Pipeline] // withEnv
00:00:37.073 [Pipeline] }
00:00:37.089 [Pipeline] // stage
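The Prepare stage above is driven entirely by autorun-spdk.conf: the file is plain KEY=value shell, so sourcing it exposes SPDK_TEST_NVMF_NICS, which maps to the kernel driver to load (e810 NICs use ice) once any conflicting RDMA modules have been removed. A minimal sketch of that logic, using only the values visible in this log:

    source ./autorun-spdk.conf            # plain shell assignments, e.g. SPDK_TEST_NVMF_NICS=e810
    case $SPDK_TEST_NVMF_NICS in
      e810) DRIVERS=ice ;;                # Intel E810 NICs are served by the ice driver
    esac
    # Unload modules that could claim the NIC; 'not currently loaded' errors are expected.
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    for D in $DRIVERS; do sudo modprobe "$D"; done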
00:00:37.098 [Pipeline] catchError
00:00:37.100 [Pipeline] {
00:00:37.116 [Pipeline] timeout
00:00:37.117 Timeout set to expire in 50 min
00:00:37.118 [Pipeline] {
00:00:37.133 [Pipeline] stage
00:00:37.135 [Pipeline] { (Tests)
00:00:37.149 [Pipeline] sh
00:00:37.435 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:37.435 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:37.435 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:37.435 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:37.435 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:37.435 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:37.435 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:37.435 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:37.435 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:37.435 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:37.435 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:37.435 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:37.435 + source /etc/os-release
00:00:37.435 ++ NAME='Fedora Linux'
00:00:37.435 ++ VERSION='38 (Cloud Edition)'
00:00:37.435 ++ ID=fedora
00:00:37.435 ++ VERSION_ID=38
00:00:37.435 ++ VERSION_CODENAME=
00:00:37.435 ++ PLATFORM_ID=platform:f38
00:00:37.435 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:37.435 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:37.435 ++ LOGO=fedora-logo-icon
00:00:37.435 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:37.435 ++ HOME_URL=https://fedoraproject.org/
00:00:37.435 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:37.435 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:37.435 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:37.435 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:37.435 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:37.435 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:37.435 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:37.435 ++ SUPPORT_END=2024-05-14
00:00:37.435 ++ VARIANT='Cloud Edition'
00:00:37.435 ++ VARIANT_ID=cloud
00:00:37.435 + uname -a
00:00:37.435 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:37.435 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:38.810 Hugepages
00:00:38.810 node hugesize free / total
00:00:38.810 node0 1048576kB 0 / 0
00:00:38.810 node0 2048kB 0 / 0
00:00:38.810 node1 1048576kB 0 / 0
00:00:38.810 node1 2048kB 0 / 0
00:00:38.810
00:00:38.811 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:38.811 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:00:38.811 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:00:38.811 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:00:38.811 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:00:38.811 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:00:38.811 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:00:38.811 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:00:38.811 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:00:38.811 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:00:38.811 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:00:38.811 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:00:38.811 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:00:38.811 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:00:38.811 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:00:38.811 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:00:38.811 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:00:38.811 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:38.811 + rm -f /tmp/spdk-ld-path
00:00:38.811 + source autorun-spdk.conf
00:00:38.811 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:38.811 ++ SPDK_TEST_NVMF=1
00:00:38.811 ++ SPDK_TEST_NVME_CLI=1
00:00:38.811 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:38.811 ++ SPDK_TEST_NVMF_NICS=e810
00:00:38.811 ++ SPDK_TEST_VFIOUSER=1
00:00:38.811 ++ SPDK_RUN_UBSAN=1
00:00:38.811 ++ NET_TYPE=phy
00:00:38.811 ++ RUN_NIGHTLY=0
00:00:38.811 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:38.811 + [[ -n '' ]]
00:00:38.811 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:38.811 + for M in /var/spdk/build-*-manifest.txt
00:00:38.811 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:38.811 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:38.811 + for M in /var/spdk/build-*-manifest.txt
00:00:38.811 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:38.811 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:38.811 ++ uname
00:00:38.811 + [[ Linux == \L\i\n\u\x ]]
00:00:38.811 + sudo dmesg -T
00:00:38.811 + sudo dmesg --clear
00:00:38.811 + dmesg_pid=355504
00:00:38.811 + sudo dmesg -Tw
00:00:38.811 + [[ Fedora Linux == FreeBSD ]]
00:00:38.811 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:38.811 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:38.811 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:38.811 + [[ -x /usr/src/fio-static/fio ]]
00:00:38.811 + export FIO_BIN=/usr/src/fio-static/fio
00:00:38.811 + FIO_BIN=/usr/src/fio-static/fio
00:00:38.811 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:38.811 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:38.811 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:38.811 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:38.811 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:38.811 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:38.811 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:38.811 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:38.811 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:38.811 Test configuration:
00:00:38.811 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:38.811 SPDK_TEST_NVMF=1
00:00:38.811 SPDK_TEST_NVME_CLI=1
00:00:38.811 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:38.811 SPDK_TEST_NVMF_NICS=e810
00:00:38.811 SPDK_TEST_VFIOUSER=1
00:00:38.811 SPDK_RUN_UBSAN=1
00:00:38.811 NET_TYPE=phy
00:00:38.811 RUN_NIGHTLY=0
13:29:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
13:29:35 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
13:29:35 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
13:29:35 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
13:29:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:29:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:29:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:29:35 -- paths/export.sh@5 -- $ export PATH
13:29:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:29:35 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
13:29:35 -- common/autobuild_common.sh@447 -- $ date +%s
13:29:35 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721906975.XXXXXX
13:29:35 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721906975.5hqPx4
13:29:35 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
13:29:35 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
13:29:35 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
13:29:35 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
13:29:35 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
13:29:35 -- common/autobuild_common.sh@463 -- $ get_config_params
13:29:35 -- common/autotest_common.sh@398 -- $ xtrace_disable
13:29:35 -- common/autotest_common.sh@10 -- $ set +x
13:29:35 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
13:29:35 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
13:29:35 -- pm/common@17 -- $ local monitor
13:29:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:29:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:29:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:29:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:29:35 -- pm/common@21 -- $ date +%s
13:29:35 -- pm/common@21 -- $ date +%s
13:29:35 -- pm/common@25 -- $ sleep 1
13:29:35 -- pm/common@21 -- $ date +%s
13:29:35 -- pm/common@21 -- $ date +%s
13:29:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721906975
13:29:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721906975
13:29:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721906975
13:29:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721906975
00:00:38.811 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721906975_collect-vmstat.pm.log
00:00:38.811 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721906975_collect-cpu-load.pm.log
00:00:38.811 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721906975_collect-cpu-temp.pm.log
00:00:38.811 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721906975_collect-bmc-pm.bmc.pm.log
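The pm/common trace above starts one collector per monitored resource, all keyed to a single epoch stamp (1721906975) so the power/ logs from one build can be correlated. Sketched out below; the paths are from this log, while running the collectors in the background with '&' is an assumption (the harness manages them through the pm/common helpers):

    ts=$(date +%s)                                        # one stamp shared by every collector
    out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
    pm=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
    for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
      "$pm/$mon" -d "$out" -l -p "monitor.autobuild.sh.$ts" &
    done
    sudo -E "$pm/collect-bmc-pm" -d "$out" -l -p "monitor.autobuild.sh.$ts" &   # BMC access needs root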
00:00:39.748 13:29:36 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
13:29:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
13:29:36 -- spdk/autobuild.sh@12 -- $ umask 022
13:29:36 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
13:29:36 -- spdk/autobuild.sh@16 -- $ date -u
00:00:39.748 Thu Jul 25 11:29:36 AM UTC 2024
13:29:36 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:39.748 v24.09-pre-321-g704257090
13:29:36 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
13:29:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
13:29:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
13:29:36 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
13:29:36 -- common/autotest_common.sh@1107 -- $ xtrace_disable
13:29:36 -- common/autotest_common.sh@10 -- $ set +x
00:00:39.748 ************************************
00:00:39.748 START TEST ubsan
00:00:39.748 ************************************
13:29:36 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:00:39.748 using ubsan
00:00:39.748
00:00:39.748 real 0m0.000s
00:00:39.748 user 0m0.000s
00:00:39.748 sys 0m0.000s
13:29:36 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
13:29:36 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:39.748 ************************************
00:00:39.748 END TEST ubsan
00:00:39.748 ************************************
13:29:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
13:29:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
13:29:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
13:29:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
13:29:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
13:29:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
13:29:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
13:29:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
13:29:36 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:40.006 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:40.006 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:40.264 Using 'verbs' RDMA provider
00:00:50.799 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:00.780 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:00.780 Creating mk/config.mk...done.
00:01:00.780 Creating mk/cc.flags.mk...done.
00:01:00.780 Type 'make' to build.
13:29:57 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
13:29:57 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
13:29:57 -- common/autotest_common.sh@1107 -- $ xtrace_disable
13:29:57 -- common/autotest_common.sh@10 -- $ set +x
00:01:00.780 ************************************
00:01:00.780 START TEST make
00:01:00.780 ************************************
13:29:57 make -- common/autotest_common.sh@1125 -- $ make -j48
00:01:00.780 make[1]: Nothing to be done for 'all'.
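Stripped of the run_test/xtrace wrapping, the build that starts here is the stock SPDK flow: configure with the flags assembled by get_config_params (plus --with-shared), then a parallel make. Condensed, using only the commands visible above:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
                --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
                --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j48    # 48 jobs on this runner; libvfio-user and DPDK are built as sub-steps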
00:01:02.177 The Meson build system
00:01:02.178 Version: 1.3.1
00:01:02.178 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:02.178 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:02.178 Build type: native build
00:01:02.178 Project name: libvfio-user
00:01:02.178 Project version: 0.0.1
00:01:02.178 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:02.178 C linker for the host machine: cc ld.bfd 2.39-16
00:01:02.178 Host machine cpu family: x86_64
00:01:02.178 Host machine cpu: x86_64
00:01:02.178 Run-time dependency threads found: YES
00:01:02.178 Library dl found: YES
00:01:02.178 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:02.178 Run-time dependency json-c found: YES 0.17
00:01:02.178 Run-time dependency cmocka found: YES 1.1.7
00:01:02.178 Program pytest-3 found: NO
00:01:02.178 Program flake8 found: NO
00:01:02.178 Program misspell-fixer found: NO
00:01:02.178 Program restructuredtext-lint found: NO
00:01:02.178 Program valgrind found: YES (/usr/bin/valgrind)
00:01:02.178 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:02.178 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:02.178 Compiler for C supports arguments -Wwrite-strings: YES
00:01:02.178 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:02.178 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:02.178 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:02.178 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:02.178 Build targets in project: 8
00:01:02.178 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:02.178 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:02.178
00:01:02.178 libvfio-user 0.0.1
00:01:02.178
00:01:02.178 User defined options
00:01:02.178 buildtype : debug
00:01:02.178 default_library: shared
00:01:02.178 libdir : /usr/local/lib
00:01:02.178
00:01:02.178 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:02.755 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:03.018 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:03.018 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:03.018 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:03.018 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:03.018 [5/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:03.018 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:03.018 [7/37] Compiling C object samples/null.p/null.c.o
00:01:03.018 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:03.018 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:03.018 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:03.018 [11/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:03.018 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:03.018 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:03.281 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:03.281 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:03.281 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:03.281 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:03.281 [18/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:03.281 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:03.281 [20/37] Compiling C object samples/server.p/server.c.o
00:01:03.281 [21/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:03.281 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:03.281 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:03.281 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:03.281 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:03.281 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:03.281 [27/37] Compiling C object samples/client.p/client.c.o
00:01:03.281 [28/37] Linking target lib/libvfio-user.so.0.0.1
00:01:03.281 [29/37] Linking target samples/client
00:01:03.281 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:03.544 [31/37] Linking target test/unit_tests
00:01:03.544 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:03.544 [33/37] Linking target samples/server
00:01:03.544 [34/37] Linking target samples/null
00:01:03.544 [35/37] Linking target samples/lspci
00:01:03.544 [36/37] Linking target samples/shadow_ioeventfd_server
00:01:03.544 [37/37] Linking target samples/gpio-pci-idio-16
00:01:03.544 INFO: autodetecting backend as ninja
00:01:03.544 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:03.807 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:04.750 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:04.750 ninja: no work to do.
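libvfio-user is configured out of tree and staged with DESTDIR instead of being installed system-wide, which is why ninja re-enters build-debug above and reports nothing left to do. The manual equivalent looks roughly like this; the directories are from this log, but the meson setup options are only inferred from the "User defined options" block above, so treat them as an approximation:

    src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    build=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user
    meson setup "$build/build-debug" "$src" --buildtype=debug -Ddefault_library=shared
    ninja -C "$build/build-debug"                                  # the 37 targets listed above
    DESTDIR=$build meson install --quiet -C "$build/build-debug"   # stage under the SPDK build tree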
00:01:09.020 The Meson build system
00:01:09.020 Version: 1.3.1
00:01:09.020 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:09.020 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:09.020 Build type: native build
00:01:09.020 Program cat found: YES (/usr/bin/cat)
00:01:09.020 Project name: DPDK
00:01:09.020 Project version: 24.03.0
00:01:09.020 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:09.020 C linker for the host machine: cc ld.bfd 2.39-16
00:01:09.020 Host machine cpu family: x86_64
00:01:09.020 Host machine cpu: x86_64
00:01:09.020 Message: ## Building in Developer Mode ##
00:01:09.020 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:09.020 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:09.020 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:09.020 Program python3 found: YES (/usr/bin/python3)
00:01:09.020 Program cat found: YES (/usr/bin/cat)
00:01:09.020 Compiler for C supports arguments -march=native: YES
00:01:09.020 Checking for size of "void *" : 8
00:01:09.020 Checking for size of "void *" : 8 (cached)
00:01:09.020 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:09.020 Library m found: YES
00:01:09.020 Library numa found: YES
00:01:09.020 Has header "numaif.h" : YES
00:01:09.020 Library fdt found: NO
00:01:09.020 Library execinfo found: NO
00:01:09.020 Has header "execinfo.h" : YES
00:01:09.020 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:09.020 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:09.020 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:09.020 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:09.020 Run-time dependency openssl found: YES 3.0.9
00:01:09.020 Run-time dependency libpcap found: YES 1.10.4
00:01:09.020 Has header "pcap.h" with dependency libpcap: YES
00:01:09.020 Compiler for C supports arguments -Wcast-qual: YES
00:01:09.020 Compiler for C supports arguments -Wdeprecated: YES
00:01:09.020 Compiler for C supports arguments -Wformat: YES
00:01:09.020 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:09.020 Compiler for C supports arguments -Wformat-security: NO
00:01:09.020 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:09.020 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:09.020 Compiler for C supports arguments -Wnested-externs: YES
00:01:09.020 Compiler for C supports arguments -Wold-style-definition: YES
00:01:09.020 Compiler for C supports arguments -Wpointer-arith: YES
00:01:09.020 Compiler for C supports arguments -Wsign-compare: YES
00:01:09.020 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:09.020 Compiler for C supports arguments -Wundef: YES
00:01:09.020 Compiler for C supports arguments -Wwrite-strings: YES
00:01:09.020 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:09.020 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:09.020 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:09.020 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:09.020 Program objdump found: YES (/usr/bin/objdump)
00:01:09.020 Compiler for C supports arguments -mavx512f: YES
00:01:09.020 Checking if "AVX512 checking" compiles: YES
00:01:09.020 Fetching value of define "__SSE4_2__" : 1
00:01:09.020 Fetching value of define "__AES__" : 1
00:01:09.020 Fetching value of define "__AVX__" : 1
00:01:09.020 Fetching value of define "__AVX2__" : (undefined)
00:01:09.020 Fetching value of define "__AVX512BW__" : (undefined)
00:01:09.020 Fetching value of define "__AVX512CD__" : (undefined)
00:01:09.020 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:09.020 Fetching value of define "__AVX512F__" : (undefined)
00:01:09.020 Fetching value of define "__AVX512VL__" : (undefined)
00:01:09.020 Fetching value of define "__PCLMUL__" : 1
00:01:09.020 Fetching value of define "__RDRND__" : 1
00:01:09.020 Fetching value of define "__RDSEED__" : (undefined)
00:01:09.020 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:09.020 Fetching value of define "__znver1__" : (undefined)
00:01:09.020 Fetching value of define "__znver2__" : (undefined)
00:01:09.020 Fetching value of define "__znver3__" : (undefined)
00:01:09.020 Fetching value of define "__znver4__" : (undefined)
00:01:09.020 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:09.020 Message: lib/log: Defining dependency "log"
00:01:09.020 Message: lib/kvargs: Defining dependency "kvargs"
00:01:09.020 Message: lib/telemetry: Defining dependency "telemetry"
00:01:09.020 Checking for function "getentropy" : NO
00:01:09.020 Message: lib/eal: Defining dependency "eal"
00:01:09.020 Message: lib/ring: Defining dependency "ring"
00:01:09.020 Message: lib/rcu: Defining dependency "rcu"
00:01:09.020 Message: lib/mempool: Defining dependency "mempool"
00:01:09.020 Message: lib/mbuf: Defining dependency "mbuf"
00:01:09.020 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:09.020 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:09.020 Compiler for C supports arguments -mpclmul: YES
00:01:09.020 Compiler for C supports arguments -maes: YES
00:01:09.020 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:09.020 Compiler for C supports arguments -mavx512bw: YES
00:01:09.020 Compiler for C supports arguments -mavx512dq: YES
00:01:09.020 Compiler for C supports arguments -mavx512vl: YES
00:01:09.020 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:09.020 Compiler for C supports arguments -mavx2: YES
00:01:09.020 Compiler for C supports arguments -mavx: YES
00:01:09.020 Message: lib/net: Defining dependency "net"
00:01:09.020 Message: lib/meter: Defining dependency "meter"
00:01:09.020 Message: lib/ethdev: Defining dependency "ethdev"
00:01:09.020 Message: lib/pci: Defining dependency "pci"
00:01:09.020 Message: lib/cmdline: Defining dependency "cmdline"
00:01:09.020 Message: lib/hash: Defining dependency "hash"
00:01:09.020 Message: lib/timer: Defining dependency "timer"
00:01:09.020 Message: lib/compressdev: Defining dependency "compressdev"
00:01:09.020 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:09.020 Message: lib/dmadev: Defining dependency "dmadev"
00:01:09.020 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:09.020 Message: lib/power: Defining dependency "power"
00:01:09.020 Message: lib/reorder: Defining dependency "reorder"
00:01:09.020 Message: lib/security: Defining dependency "security"
00:01:09.020 Has header "linux/userfaultfd.h" : YES
00:01:09.020 Has header "linux/vduse.h" : YES
00:01:09.020 Message: lib/vhost: Defining dependency "vhost"
00:01:09.020 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:09.020 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:09.020 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:09.020 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:09.020 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:09.020 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:09.020 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:09.020 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:09.020 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:09.020 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:09.020 Program doxygen found: YES (/usr/bin/doxygen)
00:01:09.020 Configuring doxy-api-html.conf using configuration
00:01:09.020 Configuring doxy-api-man.conf using configuration
00:01:09.020 Program mandb found: YES (/usr/bin/mandb)
00:01:09.021 Program sphinx-build found: NO
00:01:09.021 Configuring rte_build_config.h using configuration
00:01:09.021 Message:
00:01:09.021 =================
00:01:09.021 Applications Enabled
00:01:09.021 =================
00:01:09.021
00:01:09.021 apps:
00:01:09.021
00:01:09.021
00:01:09.021 Message:
00:01:09.021 =================
00:01:09.021 Libraries Enabled
00:01:09.021 =================
00:01:09.021
00:01:09.021 libs:
00:01:09.021 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:09.021 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:09.021 cryptodev, dmadev, power, reorder, security, vhost,
00:01:09.021
00:01:09.021 Message:
00:01:09.021 ===============
00:01:09.021 Drivers Enabled
00:01:09.021 ===============
00:01:09.021
00:01:09.021 common:
00:01:09.021
00:01:09.021 bus:
00:01:09.021 pci, vdev,
00:01:09.021 mempool:
00:01:09.021 ring,
00:01:09.021 dma:
00:01:09.021
00:01:09.021 net:
00:01:09.021
00:01:09.021 crypto:
00:01:09.021
00:01:09.021 compress:
00:01:09.021
00:01:09.021 vdpa:
00:01:09.021
00:01:09.021
00:01:09.021 Message:
00:01:09.021 =================
00:01:09.021 Content Skipped
00:01:09.021 =================
00:01:09.021
00:01:09.021 apps:
00:01:09.021 dumpcap: explicitly disabled via build config
00:01:09.021 graph: explicitly disabled via build config
00:01:09.021 pdump: explicitly disabled via build config
00:01:09.021 proc-info: explicitly disabled via build config
00:01:09.021 test-acl: explicitly disabled via build config
00:01:09.021 test-bbdev: explicitly disabled via build config
00:01:09.021 test-cmdline: explicitly disabled via build config
00:01:09.021 test-compress-perf: explicitly disabled via build config
00:01:09.021 test-crypto-perf: explicitly disabled via build config
00:01:09.021 test-dma-perf: explicitly disabled via build config
00:01:09.021 test-eventdev: explicitly disabled via build config
00:01:09.021 test-fib: explicitly disabled via build config
00:01:09.021 test-flow-perf: explicitly disabled via build config
00:01:09.021 test-gpudev: explicitly disabled via build config
00:01:09.021 test-mldev: explicitly disabled via build config
00:01:09.021 test-pipeline: explicitly disabled via build config
00:01:09.021 test-pmd: explicitly disabled via build config
00:01:09.021 test-regex: explicitly disabled via build config
00:01:09.021 test-sad: explicitly disabled via build config
00:01:09.021 test-security-perf: explicitly disabled via build config
00:01:09.021
00:01:09.021 libs:
00:01:09.021 argparse: explicitly disabled via build config
00:01:09.021 metrics: explicitly disabled via build config
00:01:09.021 acl: explicitly disabled via build config
00:01:09.021 bbdev: explicitly disabled via build config
00:01:09.021 bitratestats: explicitly disabled via build config
00:01:09.021 bpf: explicitly disabled via build config
00:01:09.021 cfgfile: explicitly disabled via build config
00:01:09.021 distributor: explicitly disabled via build config
00:01:09.021 efd: explicitly disabled via build config
00:01:09.021 eventdev: explicitly disabled via build config
00:01:09.021 dispatcher: explicitly disabled via build config
00:01:09.021 gpudev: explicitly disabled via build config
00:01:09.021 gro: explicitly disabled via build config
00:01:09.021 gso: explicitly disabled via build config
00:01:09.021 ip_frag: explicitly disabled via build config
00:01:09.021 jobstats: explicitly disabled via build config
00:01:09.021 latencystats: explicitly disabled via build config
00:01:09.021 lpm: explicitly disabled via build config
00:01:09.021 member: explicitly disabled via build config
00:01:09.021 pcapng: explicitly disabled via build config
00:01:09.021 rawdev: explicitly disabled via build config
00:01:09.021 regexdev: explicitly disabled via build config
00:01:09.021 mldev: explicitly disabled via build config
00:01:09.021 rib: explicitly disabled via build config
00:01:09.021 sched: explicitly disabled via build config
00:01:09.021 stack: explicitly disabled via build config
00:01:09.021 ipsec: explicitly disabled via build config
00:01:09.021 pdcp: explicitly disabled via build config
00:01:09.021 fib: explicitly disabled via build config
00:01:09.021 port: explicitly disabled via build config
00:01:09.021 pdump: explicitly disabled via build config
00:01:09.021 table: explicitly disabled via build config
00:01:09.021 pipeline: explicitly disabled via build config
00:01:09.021 graph: explicitly disabled via build config
00:01:09.021 node: explicitly disabled via build config
00:01:09.021
00:01:09.021 drivers:
00:01:09.021 common/cpt: not in enabled drivers build config
00:01:09.021 common/dpaax: not in enabled drivers build config
00:01:09.021 common/iavf: not in enabled drivers build config
00:01:09.021 common/idpf: not in enabled drivers build config
00:01:09.021 common/ionic: not in enabled drivers build config
00:01:09.021 common/mvep: not in enabled drivers build config
00:01:09.021 common/octeontx: not in enabled drivers build config
00:01:09.021 bus/auxiliary: not in enabled drivers build config
00:01:09.021 bus/cdx: not in enabled drivers build config
00:01:09.021 bus/dpaa: not in enabled drivers build config
00:01:09.021 bus/fslmc: not in enabled drivers build config
00:01:09.021 bus/ifpga: not in enabled drivers build config
00:01:09.021 bus/platform: not in enabled drivers build config
00:01:09.021 bus/uacce: not in enabled drivers build config
00:01:09.021 bus/vmbus: not in enabled drivers build config
00:01:09.021 common/cnxk: not in enabled drivers build config
00:01:09.021 common/mlx5: not in enabled drivers build config
00:01:09.021 common/nfp: not in enabled drivers build config
00:01:09.021 common/nitrox: not in enabled drivers build config
00:01:09.021 common/qat: not in enabled drivers build config
00:01:09.021 common/sfc_efx: not in enabled drivers build config
00:01:09.021 mempool/bucket: not in enabled drivers build config
00:01:09.021 mempool/cnxk: not in enabled drivers build config
00:01:09.021 mempool/dpaa: not in enabled drivers build config
00:01:09.021 mempool/dpaa2: not in enabled drivers build config
00:01:09.021 mempool/octeontx: not in enabled drivers build config
00:01:09.021 mempool/stack: not in enabled drivers build config
00:01:09.021 dma/cnxk: not in enabled drivers build config
00:01:09.021 dma/dpaa: not in enabled drivers build config
00:01:09.021 dma/dpaa2: not in enabled drivers build config
00:01:09.021 dma/hisilicon: not in enabled drivers build config
00:01:09.021 dma/idxd: not in enabled drivers build config
00:01:09.021 dma/ioat: not in enabled drivers build config
00:01:09.021 dma/skeleton: not in enabled drivers build config
00:01:09.021 net/af_packet: not in enabled drivers build config
00:01:09.021 net/af_xdp: not in enabled drivers build config
00:01:09.021 net/ark: not in enabled drivers build config
00:01:09.021 net/atlantic: not in enabled drivers build config
00:01:09.021 net/avp: not in enabled drivers build config
00:01:09.021 net/axgbe: not in enabled drivers build config
00:01:09.021 net/bnx2x: not in enabled drivers build config
00:01:09.021 net/bnxt: not in enabled drivers build config
00:01:09.021 net/bonding: not in enabled drivers build config
00:01:09.021 net/cnxk: not in enabled drivers build config
00:01:09.021 net/cpfl: not in enabled drivers build config
00:01:09.021 net/cxgbe: not in enabled drivers build config
00:01:09.021 net/dpaa: not in enabled drivers build config
00:01:09.021 net/dpaa2: not in enabled drivers build config
00:01:09.021 net/e1000: not in enabled drivers build config
00:01:09.021 net/ena: not in enabled drivers build config
00:01:09.021 net/enetc: not in enabled drivers build config
00:01:09.021 net/enetfec: not in enabled drivers build config
00:01:09.021 net/enic: not in enabled drivers build config
00:01:09.021 net/failsafe: not in enabled drivers build config
00:01:09.021 net/fm10k: not in enabled drivers build config
00:01:09.021 net/gve: not in enabled drivers build config
00:01:09.021 net/hinic: not in enabled drivers build config
00:01:09.021 net/hns3: not in enabled drivers build config
00:01:09.021 net/i40e: not in enabled drivers build config
00:01:09.021 net/iavf: not in enabled drivers build config
00:01:09.021 net/ice: not in enabled drivers build config
00:01:09.021 net/idpf: not in enabled drivers build config
00:01:09.021 net/igc: not in enabled drivers build config
00:01:09.021 net/ionic: not in enabled drivers build config
00:01:09.021 net/ipn3ke: not in enabled drivers build config
00:01:09.021 net/ixgbe: not in enabled drivers build config
00:01:09.021 net/mana: not in enabled drivers build config
00:01:09.021 net/memif: not in enabled drivers build config
00:01:09.021 net/mlx4: not in enabled drivers build config
00:01:09.021 net/mlx5: not in enabled drivers build config
00:01:09.021 net/mvneta: not in enabled drivers build config
00:01:09.021 net/mvpp2: not in enabled drivers build config
00:01:09.021 net/netvsc: not in enabled drivers build config
00:01:09.021 net/nfb: not in enabled drivers build config
00:01:09.021 net/nfp: not in enabled drivers build config
00:01:09.021 net/ngbe: not in enabled drivers build config
00:01:09.021 net/null: not in enabled drivers build config
00:01:09.021 net/octeontx: not in enabled drivers build config
00:01:09.021 net/octeon_ep: not in enabled drivers build config
00:01:09.021 net/pcap: not in enabled drivers build config
00:01:09.021 net/pfe: not in enabled drivers build config
00:01:09.022 net/qede: not in enabled drivers build config
00:01:09.022 net/ring: not in enabled drivers build config
00:01:09.022 net/sfc: not in enabled drivers build config
00:01:09.022 net/softnic: not in enabled drivers build config
00:01:09.022 net/tap: not in enabled drivers build config
00:01:09.022 net/thunderx: not in enabled drivers build config
00:01:09.022 net/txgbe: not in enabled drivers build config
00:01:09.022 net/vdev_netvsc: not in enabled drivers build config
00:01:09.022 net/vhost: not in enabled drivers build config
00:01:09.022 net/virtio: not in enabled drivers build config
00:01:09.022 net/vmxnet3: not in enabled drivers build config
00:01:09.022 raw/*: missing internal dependency, "rawdev"
00:01:09.022 crypto/armv8: not in enabled drivers build config
00:01:09.022 crypto/bcmfs: not in enabled drivers build config
00:01:09.022 crypto/caam_jr: not in enabled drivers build config
00:01:09.022 crypto/ccp: not in enabled drivers build config
00:01:09.022 crypto/cnxk: not in enabled drivers build config
00:01:09.022 crypto/dpaa_sec: not in enabled drivers build config
00:01:09.022 crypto/dpaa2_sec: not in enabled drivers build config
00:01:09.022 crypto/ipsec_mb: not in enabled drivers build config
00:01:09.022 crypto/mlx5: not in enabled drivers build config
00:01:09.022 crypto/mvsam: not in enabled drivers build config
00:01:09.022 crypto/nitrox: not in enabled drivers build config
00:01:09.022 crypto/null: not in enabled drivers build config
00:01:09.022 crypto/octeontx: not in enabled drivers build config
00:01:09.022 crypto/openssl: not in enabled drivers build config
00:01:09.022 crypto/scheduler: not in enabled drivers build config
00:01:09.022 crypto/uadk: not in enabled drivers build config
00:01:09.022 crypto/virtio: not in enabled drivers build config
00:01:09.022 compress/isal: not in enabled drivers build config
00:01:09.022 compress/mlx5: not in enabled drivers build config
00:01:09.022 compress/nitrox: not in enabled drivers build config
00:01:09.022 compress/octeontx: not in enabled drivers build config
00:01:09.022 compress/zlib: not in enabled drivers build config
00:01:09.022 regex/*: missing internal dependency, "regexdev"
00:01:09.022 ml/*: missing internal dependency, "mldev"
00:01:09.022 vdpa/ifc: not in enabled drivers build config
00:01:09.022 vdpa/mlx5: not in enabled drivers build config
00:01:09.022 vdpa/nfp: not in enabled drivers build config
00:01:09.022 vdpa/sfc: not in enabled drivers build config
00:01:09.022 event/*: missing internal dependency, "eventdev"
00:01:09.022 baseband/*: missing internal dependency, "bbdev"
00:01:09.022 gpu/*: missing internal dependency, "gpudev"
00:01:09.022
00:01:09.022
00:01:09.280 Build targets in project: 85
00:01:09.280
00:01:09.280 DPDK 24.03.0
00:01:09.280
00:01:09.280 User defined options
00:01:09.280 buildtype : debug
00:01:09.280 default_library : shared
00:01:09.280 libdir : lib
00:01:09.280 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:09.280 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:09.280 c_link_args :
00:01:09.280 cpu_instruction_set: native
00:01:09.280 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:01:09.280 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:01:09.280 enable_docs : false
00:01:09.280 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:09.280 enable_kmods : false
00:01:09.280 max_lcores : 128
00:01:09.280 tests : false
00:01:09.280
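The options block above is how SPDK keeps its embedded DPDK build small: every app and most libraries are disabled, and only the pci/vdev buses plus the ring mempool driver stay enabled, leaving the 85 targets counted earlier. As a meson command line this would look roughly like the following (a sketch with the long lists truncated; SPDK actually generates this invocation through its configure/dpdkbuild machinery):

    meson setup dpdk/build-tmp dpdk \
      --buildtype=debug -Ddefault_library=shared \
      -Ddisable_apps="dumpcap,graph,pdump,proc-info,..." \
      -Ddisable_libs="acl,argparse,bbdev,bitratestats,..." \
      -Denable_drivers="bus,bus/pci,bus/vdev,mempool/ring" \
      -Dtests=false -Dmax_lcores=128
    ninja -C dpdk/build-tmp    # the [n/268] compile steps that follow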
00:01:09.280 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:09.857 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:09.857 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:09.857 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:09.857 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:09.857 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:09.857 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:09.857 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:09.857 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:09.857 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:09.857 [9/268] Linking static target lib/librte_kvargs.a
00:01:09.857 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:09.857 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:09.857 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:09.857 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:09.857 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:09.857 [15/268] Linking static target lib/librte_log.a
00:01:09.857 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:10.429 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:10.688 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:10.688 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:10.688 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:10.688 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:10.688 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:10.688 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:10.688 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:10.688 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:10.688 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:10.688 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:10.688 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:10.688 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:10.688 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:10.688 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:10.688 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:10.688 [33/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:10.688 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:10.688 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:10.688 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:10.688 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:10.688 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:10.688 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:10.688 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:10.688 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:10.688 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:10.688 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:10.688 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:10.688 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:10.688 [46/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:10.688 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:10.688 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:10.688 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:10.688 [50/268] Linking static target lib/librte_telemetry.a
00:01:10.953 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:10.953 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:10.953 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:10.953 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:10.953 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:10.953 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:10.953 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:10.953 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:10.953 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:10.953 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:10.953 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:10.953 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:10.953 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:10.953 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:11.215 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:11.215 [66/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:11.215 [67/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:11.215 [68/268] Linking static target lib/librte_pci.a
00:01:11.215 [69/268] Linking target lib/librte_log.so.24.1
00:01:11.215 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:11.476 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:11.476 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:11.476 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:11.476 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:11.739 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:11.739 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:11.739 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:11.739 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:11.739 [80/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:11.739 [81/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:11.739 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:11.739 [83/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:11.739 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:11.739 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:11.739 [86/268] Linking static target lib/librte_ring.a 00:01:11.739 [87/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:11.739 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:11.739 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:11.739 [90/268] Linking target lib/librte_kvargs.so.24.1 00:01:11.739 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:11.739 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:11.739 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:11.739 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:11.739 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:11.739 [96/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:11.739 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:11.739 [98/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:11.739 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:11.739 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:11.739 [101/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:11.739 [102/268] Linking static target lib/librte_meter.a 00:01:11.739 [103/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:11.739 [104/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:11.739 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:11.739 [106/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.739 [107/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:11.739 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:12.002 [109/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.002 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:12.002 [111/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 
00:01:12.002 [112/268] Linking static target lib/librte_mempool.a 00:01:12.002 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:12.002 [114/268] Linking target lib/librte_telemetry.so.24.1 00:01:12.002 [115/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:12.002 [116/268] Linking static target lib/librte_rcu.a 00:01:12.002 [117/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:12.002 [118/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:12.002 [119/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:12.002 [120/268] Linking static target lib/librte_eal.a 00:01:12.002 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:12.002 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:12.002 [123/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:12.002 [124/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:12.265 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:12.265 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:12.265 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:12.265 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:12.265 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:12.265 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:12.265 [131/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:12.265 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:12.265 [133/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:12.265 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:12.265 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.265 [136/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.265 [137/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:12.528 [138/268] Linking static target lib/librte_net.a 00:01:12.528 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:12.528 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:12.528 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:12.528 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:12.528 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:12.528 [144/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:12.787 [145/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.787 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:12.787 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:12.787 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:12.787 [149/268] Linking static target lib/librte_cmdline.a 00:01:12.787 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:12.787 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:12.787 [152/268] Compiling C object 
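
The "Generating lib/<name>.sym_chk" entries interleaved with the compiles are DPDK's exported-symbol audit: once a library links, a custom command verifies that the symbols the archive really exports line up with the library's version.map. A conceptual sketch only (the actual check is DPDK's buildtools/check-symbols.sh, which additionally understands experimental and internal map sections):

    # Sketch: flag any mismatch between exported and declared symbols.
    nm -g --defined-only build-tmp/lib/librte_kvargs.a \
        | awk 'NF == 3 {print $3}' | sort > exported.txt
    awk '/^[ \t]+[A-Za-z_][A-Za-z_0-9]*;$/ {gsub(/[ \t;]/, ""); print}' \
        lib/kvargs/version.map | sort > declared.txt
    diff exported.txt declared.txt    # any output means the map is stale
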
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:12.787 [153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:12.787 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:12.787 [155/268] Linking static target lib/librte_timer.a 00:01:12.787 [156/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.787 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:13.045 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:13.045 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:13.045 [160/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:13.045 [161/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:13.045 [162/268] Linking static target lib/librte_dmadev.a 00:01:13.045 [163/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:13.045 [164/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:13.045 [165/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:13.045 [166/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:13.045 [167/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.045 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:13.303 [169/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:13.303 [170/268] Linking static target lib/librte_compressdev.a 00:01:13.303 [171/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.303 [172/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:13.303 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:13.303 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:13.303 [175/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:13.303 [176/268] Linking static target lib/librte_power.a 00:01:13.303 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:13.303 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:13.303 [179/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:13.303 [180/268] Linking static target lib/librte_mbuf.a 00:01:13.303 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:13.303 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:13.303 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:13.303 [184/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:13.303 [185/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:13.303 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:13.303 [187/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:13.303 [188/268] Linking static target lib/librte_hash.a 00:01:13.560 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:13.561 [190/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.561 [191/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:13.561 [192/268] Linking static target 
lib/librte_reorder.a 00:01:13.561 [193/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:13.561 [194/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:13.561 [195/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:13.561 [196/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.561 [197/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:13.561 [198/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:13.561 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:13.561 [200/268] Linking static target lib/librte_security.a 00:01:13.561 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:13.561 [202/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:13.561 [203/268] Linking static target drivers/librte_bus_vdev.a 00:01:13.818 [204/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.818 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:13.818 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:13.818 [207/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:13.818 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:13.818 [209/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.818 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:13.818 [211/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:13.818 [212/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:13.818 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:13.818 [214/268] Linking static target drivers/librte_mempool_ring.a 00:01:13.818 [215/268] Linking static target drivers/librte_bus_pci.a 00:01:13.818 [216/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.818 [217/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.818 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.818 [219/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.075 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:14.075 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.075 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:14.075 [223/268] Linking static target lib/librte_cryptodev.a 00:01:14.075 [224/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:14.075 [225/268] Linking static target lib/librte_ethdev.a 00:01:14.334 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.266 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.640 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:18.539 [229/268] Generating lib/eal.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:18.539 [230/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.539 [231/268] Linking target lib/librte_eal.so.24.1 00:01:18.539 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:18.539 [233/268] Linking target lib/librte_meter.so.24.1 00:01:18.539 [234/268] Linking target lib/librte_pci.so.24.1 00:01:18.539 [235/268] Linking target lib/librte_ring.so.24.1 00:01:18.539 [236/268] Linking target lib/librte_timer.so.24.1 00:01:18.539 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:18.539 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:18.539 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:18.539 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:18.539 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:18.539 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:18.539 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:18.796 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:18.796 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:18.796 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:18.796 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:18.796 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:18.796 [249/268] Linking target lib/librte_mbuf.so.24.1 00:01:18.796 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:19.054 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:19.054 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:19.054 [253/268] Linking target lib/librte_compressdev.so.24.1 00:01:19.054 [254/268] Linking target lib/librte_net.so.24.1 00:01:19.054 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:19.054 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:19.054 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:19.054 [258/268] Linking target lib/librte_hash.so.24.1 00:01:19.054 [259/268] Linking target lib/librte_security.so.24.1 00:01:19.054 [260/268] Linking target lib/librte_cmdline.so.24.1 00:01:19.313 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:19.313 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:19.313 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:19.313 [264/268] Linking target lib/librte_power.so.24.1 00:01:21.844 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:21.844 [266/268] Linking static target lib/librte_vhost.a 00:01:22.777 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.777 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:22.777 INFO: autodetecting backend as ninja 00:01:22.777 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:23.710 CC lib/log/log.o 00:01:23.710 CC lib/log/log_flags.o 00:01:23.710 CC lib/ut_mock/mock.o 00:01:23.710 CC lib/log/log_deprecated.o 00:01:23.710 CC lib/ut/ut.o 00:01:23.710 LIB 
libspdk_ut_mock.a 00:01:23.710 LIB libspdk_log.a 00:01:23.710 LIB libspdk_ut.a 00:01:23.968 SO libspdk_ut_mock.so.6.0 00:01:23.968 SO libspdk_log.so.7.0 00:01:23.968 SO libspdk_ut.so.2.0 00:01:23.968 SYMLINK libspdk_ut_mock.so 00:01:23.968 SYMLINK libspdk_ut.so 00:01:23.968 SYMLINK libspdk_log.so 00:01:23.968 CC lib/ioat/ioat.o 00:01:23.968 CC lib/dma/dma.o 00:01:23.968 CC lib/util/base64.o 00:01:23.968 CC lib/util/bit_array.o 00:01:23.968 CC lib/util/cpuset.o 00:01:23.968 CXX lib/trace_parser/trace.o 00:01:23.968 CC lib/util/crc16.o 00:01:23.968 CC lib/util/crc32.o 00:01:23.968 CC lib/util/crc32c.o 00:01:23.968 CC lib/util/crc32_ieee.o 00:01:23.968 CC lib/util/crc64.o 00:01:23.968 CC lib/util/dif.o 00:01:23.968 CC lib/util/fd.o 00:01:23.968 CC lib/util/fd_group.o 00:01:23.968 CC lib/util/file.o 00:01:23.968 CC lib/util/hexlify.o 00:01:23.968 CC lib/util/iov.o 00:01:23.968 CC lib/util/math.o 00:01:23.968 CC lib/util/net.o 00:01:23.968 CC lib/util/pipe.o 00:01:23.968 CC lib/util/strerror_tls.o 00:01:23.968 CC lib/util/string.o 00:01:23.968 CC lib/util/uuid.o 00:01:24.225 CC lib/util/xor.o 00:01:24.225 CC lib/util/zipf.o 00:01:24.225 CC lib/vfio_user/host/vfio_user_pci.o 00:01:24.225 CC lib/vfio_user/host/vfio_user.o 00:01:24.225 LIB libspdk_dma.a 00:01:24.225 SO libspdk_dma.so.4.0 00:01:24.483 SYMLINK libspdk_dma.so 00:01:24.483 LIB libspdk_ioat.a 00:01:24.483 SO libspdk_ioat.so.7.0 00:01:24.483 SYMLINK libspdk_ioat.so 00:01:24.483 LIB libspdk_vfio_user.a 00:01:24.483 SO libspdk_vfio_user.so.5.0 00:01:24.483 SYMLINK libspdk_vfio_user.so 00:01:24.741 LIB libspdk_util.a 00:01:24.741 SO libspdk_util.so.10.0 00:01:24.741 SYMLINK libspdk_util.so 00:01:25.000 CC lib/idxd/idxd.o 00:01:25.000 CC lib/rdma_utils/rdma_utils.o 00:01:25.000 CC lib/json/json_parse.o 00:01:25.000 CC lib/idxd/idxd_user.o 00:01:25.000 CC lib/conf/conf.o 00:01:25.000 CC lib/json/json_util.o 00:01:25.000 CC lib/env_dpdk/env.o 00:01:25.000 CC lib/idxd/idxd_kernel.o 00:01:25.000 CC lib/json/json_write.o 00:01:25.000 CC lib/env_dpdk/memory.o 00:01:25.000 CC lib/env_dpdk/pci.o 00:01:25.000 CC lib/env_dpdk/init.o 00:01:25.000 CC lib/rdma_provider/common.o 00:01:25.000 CC lib/vmd/vmd.o 00:01:25.000 CC lib/env_dpdk/threads.o 00:01:25.000 CC lib/env_dpdk/pci_ioat.o 00:01:25.000 CC lib/vmd/led.o 00:01:25.000 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:25.000 CC lib/env_dpdk/pci_virtio.o 00:01:25.000 CC lib/env_dpdk/pci_vmd.o 00:01:25.000 CC lib/env_dpdk/pci_idxd.o 00:01:25.000 CC lib/env_dpdk/pci_event.o 00:01:25.000 CC lib/env_dpdk/sigbus_handler.o 00:01:25.000 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:25.000 CC lib/env_dpdk/pci_dpdk.o 00:01:25.000 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:25.000 LIB libspdk_trace_parser.a 00:01:25.000 SO libspdk_trace_parser.so.5.0 00:01:25.258 SYMLINK libspdk_trace_parser.so 00:01:25.258 LIB libspdk_conf.a 00:01:25.258 SO libspdk_conf.so.6.0 00:01:25.258 LIB libspdk_rdma_provider.a 00:01:25.258 LIB libspdk_rdma_utils.a 00:01:25.258 LIB libspdk_json.a 00:01:25.258 SO libspdk_rdma_utils.so.1.0 00:01:25.258 SYMLINK libspdk_conf.so 00:01:25.258 SO libspdk_rdma_provider.so.6.0 00:01:25.258 SO libspdk_json.so.6.0 00:01:25.516 SYMLINK libspdk_rdma_utils.so 00:01:25.516 SYMLINK libspdk_rdma_provider.so 00:01:25.516 SYMLINK libspdk_json.so 00:01:25.516 CC lib/jsonrpc/jsonrpc_server.o 00:01:25.516 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:25.516 CC lib/jsonrpc/jsonrpc_client.o 00:01:25.516 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:25.773 LIB libspdk_idxd.a 00:01:25.773 LIB libspdk_vmd.a 00:01:25.773 SO 
libspdk_idxd.so.12.0 00:01:25.773 SO libspdk_vmd.so.6.0 00:01:25.774 SYMLINK libspdk_idxd.so 00:01:25.774 SYMLINK libspdk_vmd.so 00:01:25.774 LIB libspdk_jsonrpc.a 00:01:25.774 SO libspdk_jsonrpc.so.6.0 00:01:26.031 SYMLINK libspdk_jsonrpc.so 00:01:26.031 CC lib/rpc/rpc.o 00:01:26.289 LIB libspdk_rpc.a 00:01:26.289 SO libspdk_rpc.so.6.0 00:01:26.548 SYMLINK libspdk_rpc.so 00:01:26.548 CC lib/keyring/keyring.o 00:01:26.548 CC lib/notify/notify.o 00:01:26.548 CC lib/keyring/keyring_rpc.o 00:01:26.548 CC lib/notify/notify_rpc.o 00:01:26.548 CC lib/trace/trace.o 00:01:26.548 CC lib/trace/trace_flags.o 00:01:26.548 CC lib/trace/trace_rpc.o 00:01:26.805 LIB libspdk_notify.a 00:01:26.805 SO libspdk_notify.so.6.0 00:01:26.805 LIB libspdk_keyring.a 00:01:26.805 SYMLINK libspdk_notify.so 00:01:26.805 LIB libspdk_trace.a 00:01:26.805 SO libspdk_keyring.so.1.0 00:01:26.805 SO libspdk_trace.so.10.0 00:01:26.805 SYMLINK libspdk_keyring.so 00:01:26.805 SYMLINK libspdk_trace.so 00:01:27.062 LIB libspdk_env_dpdk.a 00:01:27.062 SO libspdk_env_dpdk.so.15.0 00:01:27.062 CC lib/sock/sock.o 00:01:27.062 CC lib/sock/sock_rpc.o 00:01:27.062 CC lib/thread/thread.o 00:01:27.062 CC lib/thread/iobuf.o 00:01:27.319 SYMLINK libspdk_env_dpdk.so 00:01:27.576 LIB libspdk_sock.a 00:01:27.576 SO libspdk_sock.so.10.0 00:01:27.576 SYMLINK libspdk_sock.so 00:01:27.834 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:27.834 CC lib/nvme/nvme_ctrlr.o 00:01:27.834 CC lib/nvme/nvme_fabric.o 00:01:27.834 CC lib/nvme/nvme_ns_cmd.o 00:01:27.834 CC lib/nvme/nvme_ns.o 00:01:27.834 CC lib/nvme/nvme_pcie_common.o 00:01:27.834 CC lib/nvme/nvme_pcie.o 00:01:27.834 CC lib/nvme/nvme_qpair.o 00:01:27.834 CC lib/nvme/nvme.o 00:01:27.834 CC lib/nvme/nvme_quirks.o 00:01:27.834 CC lib/nvme/nvme_transport.o 00:01:27.834 CC lib/nvme/nvme_discovery.o 00:01:27.834 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:27.834 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:27.834 CC lib/nvme/nvme_tcp.o 00:01:27.834 CC lib/nvme/nvme_opal.o 00:01:27.834 CC lib/nvme/nvme_io_msg.o 00:01:27.834 CC lib/nvme/nvme_poll_group.o 00:01:27.834 CC lib/nvme/nvme_zns.o 00:01:27.834 CC lib/nvme/nvme_stubs.o 00:01:27.834 CC lib/nvme/nvme_auth.o 00:01:27.834 CC lib/nvme/nvme_cuse.o 00:01:27.834 CC lib/nvme/nvme_vfio_user.o 00:01:27.834 CC lib/nvme/nvme_rdma.o 00:01:28.769 LIB libspdk_thread.a 00:01:28.769 SO libspdk_thread.so.10.1 00:01:28.769 SYMLINK libspdk_thread.so 00:01:28.769 CC lib/vfu_tgt/tgt_endpoint.o 00:01:28.769 CC lib/virtio/virtio.o 00:01:28.769 CC lib/accel/accel.o 00:01:28.769 CC lib/blob/blobstore.o 00:01:28.769 CC lib/vfu_tgt/tgt_rpc.o 00:01:28.769 CC lib/virtio/virtio_vhost_user.o 00:01:29.026 CC lib/accel/accel_rpc.o 00:01:29.026 CC lib/blob/request.o 00:01:29.026 CC lib/virtio/virtio_vfio_user.o 00:01:29.026 CC lib/init/json_config.o 00:01:29.026 CC lib/accel/accel_sw.o 00:01:29.026 CC lib/blob/zeroes.o 00:01:29.026 CC lib/init/subsystem.o 00:01:29.026 CC lib/virtio/virtio_pci.o 00:01:29.026 CC lib/blob/blob_bs_dev.o 00:01:29.026 CC lib/init/subsystem_rpc.o 00:01:29.026 CC lib/init/rpc.o 00:01:29.283 LIB libspdk_init.a 00:01:29.283 SO libspdk_init.so.5.0 00:01:29.283 LIB libspdk_virtio.a 00:01:29.283 LIB libspdk_vfu_tgt.a 00:01:29.283 SYMLINK libspdk_init.so 00:01:29.283 SO libspdk_virtio.so.7.0 00:01:29.283 SO libspdk_vfu_tgt.so.3.0 00:01:29.283 SYMLINK libspdk_vfu_tgt.so 00:01:29.283 SYMLINK libspdk_virtio.so 00:01:29.542 CC lib/event/app.o 00:01:29.542 CC lib/event/reactor.o 00:01:29.542 CC lib/event/log_rpc.o 00:01:29.542 CC lib/event/app_rpc.o 00:01:29.542 CC 
lib/event/scheduler_static.o 00:01:29.800 LIB libspdk_event.a 00:01:29.800 SO libspdk_event.so.14.0 00:01:30.058 SYMLINK libspdk_event.so 00:01:30.058 LIB libspdk_accel.a 00:01:30.058 SO libspdk_accel.so.16.0 00:01:30.058 SYMLINK libspdk_accel.so 00:01:30.058 LIB libspdk_nvme.a 00:01:30.315 SO libspdk_nvme.so.13.1 00:01:30.315 CC lib/bdev/bdev.o 00:01:30.315 CC lib/bdev/bdev_rpc.o 00:01:30.315 CC lib/bdev/bdev_zone.o 00:01:30.315 CC lib/bdev/part.o 00:01:30.315 CC lib/bdev/scsi_nvme.o 00:01:30.574 SYMLINK libspdk_nvme.so 00:01:31.948 LIB libspdk_blob.a 00:01:31.948 SO libspdk_blob.so.11.0 00:01:31.948 SYMLINK libspdk_blob.so 00:01:32.206 CC lib/blobfs/blobfs.o 00:01:32.206 CC lib/blobfs/tree.o 00:01:32.206 CC lib/lvol/lvol.o 00:01:32.772 LIB libspdk_bdev.a 00:01:32.772 SO libspdk_bdev.so.16.0 00:01:32.772 SYMLINK libspdk_bdev.so 00:01:33.051 LIB libspdk_blobfs.a 00:01:33.051 SO libspdk_blobfs.so.10.0 00:01:33.051 CC lib/scsi/dev.o 00:01:33.051 CC lib/ublk/ublk.o 00:01:33.051 CC lib/nbd/nbd.o 00:01:33.051 CC lib/scsi/lun.o 00:01:33.051 CC lib/ublk/ublk_rpc.o 00:01:33.051 CC lib/nvmf/ctrlr.o 00:01:33.051 CC lib/nbd/nbd_rpc.o 00:01:33.051 CC lib/ftl/ftl_core.o 00:01:33.051 CC lib/nvmf/ctrlr_discovery.o 00:01:33.051 CC lib/scsi/port.o 00:01:33.051 CC lib/ftl/ftl_init.o 00:01:33.051 CC lib/nvmf/ctrlr_bdev.o 00:01:33.051 CC lib/scsi/scsi.o 00:01:33.051 CC lib/ftl/ftl_layout.o 00:01:33.051 CC lib/nvmf/subsystem.o 00:01:33.051 CC lib/scsi/scsi_bdev.o 00:01:33.051 CC lib/ftl/ftl_debug.o 00:01:33.051 CC lib/scsi/scsi_pr.o 00:01:33.052 CC lib/nvmf/nvmf.o 00:01:33.052 CC lib/ftl/ftl_io.o 00:01:33.052 CC lib/nvmf/transport.o 00:01:33.052 CC lib/nvmf/nvmf_rpc.o 00:01:33.052 CC lib/ftl/ftl_sb.o 00:01:33.052 CC lib/scsi/scsi_rpc.o 00:01:33.052 CC lib/ftl/ftl_l2p.o 00:01:33.052 CC lib/scsi/task.o 00:01:33.052 CC lib/nvmf/tcp.o 00:01:33.052 CC lib/ftl/ftl_l2p_flat.o 00:01:33.052 CC lib/nvmf/stubs.o 00:01:33.052 CC lib/ftl/ftl_nv_cache.o 00:01:33.052 CC lib/nvmf/mdns_server.o 00:01:33.052 CC lib/ftl/ftl_band.o 00:01:33.052 CC lib/nvmf/vfio_user.o 00:01:33.052 CC lib/ftl/ftl_band_ops.o 00:01:33.052 CC lib/nvmf/rdma.o 00:01:33.052 CC lib/ftl/ftl_writer.o 00:01:33.052 CC lib/nvmf/auth.o 00:01:33.052 CC lib/ftl/ftl_rq.o 00:01:33.052 CC lib/ftl/ftl_reloc.o 00:01:33.052 LIB libspdk_lvol.a 00:01:33.052 CC lib/ftl/ftl_l2p_cache.o 00:01:33.052 CC lib/ftl/ftl_p2l.o 00:01:33.052 CC lib/ftl/mngt/ftl_mngt.o 00:01:33.052 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:33.052 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:33.052 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:33.052 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:33.052 SYMLINK libspdk_blobfs.so 00:01:33.052 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:33.052 SO libspdk_lvol.so.10.0 00:01:33.315 SYMLINK libspdk_lvol.so 00:01:33.315 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:33.315 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:33.315 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:33.315 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:33.579 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:33.579 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:33.579 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:33.579 CC lib/ftl/utils/ftl_conf.o 00:01:33.579 CC lib/ftl/utils/ftl_md.o 00:01:33.579 CC lib/ftl/utils/ftl_mempool.o 00:01:33.579 CC lib/ftl/utils/ftl_bitmap.o 00:01:33.579 CC lib/ftl/utils/ftl_property.o 00:01:33.579 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:33.579 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:33.579 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:33.579 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:33.579 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:01:33.579 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:33.579 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:33.579 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:33.579 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:33.838 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:33.838 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:33.838 CC lib/ftl/base/ftl_base_dev.o 00:01:33.838 CC lib/ftl/base/ftl_base_bdev.o 00:01:33.838 CC lib/ftl/ftl_trace.o 00:01:33.838 LIB libspdk_nbd.a 00:01:33.838 SO libspdk_nbd.so.7.0 00:01:34.096 LIB libspdk_scsi.a 00:01:34.096 SYMLINK libspdk_nbd.so 00:01:34.096 SO libspdk_scsi.so.9.0 00:01:34.096 SYMLINK libspdk_scsi.so 00:01:34.096 LIB libspdk_ublk.a 00:01:34.096 SO libspdk_ublk.so.3.0 00:01:34.354 SYMLINK libspdk_ublk.so 00:01:34.354 CC lib/vhost/vhost.o 00:01:34.354 CC lib/iscsi/conn.o 00:01:34.354 CC lib/iscsi/init_grp.o 00:01:34.354 CC lib/vhost/vhost_rpc.o 00:01:34.354 CC lib/vhost/vhost_scsi.o 00:01:34.354 CC lib/iscsi/iscsi.o 00:01:34.354 CC lib/iscsi/md5.o 00:01:34.354 CC lib/vhost/vhost_blk.o 00:01:34.354 CC lib/vhost/rte_vhost_user.o 00:01:34.354 CC lib/iscsi/param.o 00:01:34.354 CC lib/iscsi/portal_grp.o 00:01:34.354 CC lib/iscsi/tgt_node.o 00:01:34.354 CC lib/iscsi/iscsi_subsystem.o 00:01:34.354 CC lib/iscsi/iscsi_rpc.o 00:01:34.354 CC lib/iscsi/task.o 00:01:34.611 LIB libspdk_ftl.a 00:01:34.611 SO libspdk_ftl.so.9.0 00:01:35.177 SYMLINK libspdk_ftl.so 00:01:35.436 LIB libspdk_vhost.a 00:01:35.436 SO libspdk_vhost.so.8.0 00:01:35.694 LIB libspdk_nvmf.a 00:01:35.694 SYMLINK libspdk_vhost.so 00:01:35.694 SO libspdk_nvmf.so.19.0 00:01:35.694 LIB libspdk_iscsi.a 00:01:35.694 SO libspdk_iscsi.so.8.0 00:01:35.951 SYMLINK libspdk_nvmf.so 00:01:35.952 SYMLINK libspdk_iscsi.so 00:01:36.210 CC module/vfu_device/vfu_virtio.o 00:01:36.210 CC module/env_dpdk/env_dpdk_rpc.o 00:01:36.210 CC module/vfu_device/vfu_virtio_blk.o 00:01:36.210 CC module/vfu_device/vfu_virtio_scsi.o 00:01:36.210 CC module/vfu_device/vfu_virtio_rpc.o 00:01:36.210 CC module/scheduler/gscheduler/gscheduler.o 00:01:36.210 CC module/accel/error/accel_error.o 00:01:36.210 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:36.210 CC module/sock/posix/posix.o 00:01:36.210 CC module/accel/dsa/accel_dsa.o 00:01:36.210 CC module/keyring/file/keyring.o 00:01:36.210 CC module/accel/error/accel_error_rpc.o 00:01:36.210 CC module/keyring/file/keyring_rpc.o 00:01:36.210 CC module/accel/dsa/accel_dsa_rpc.o 00:01:36.210 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:36.210 CC module/accel/ioat/accel_ioat_rpc.o 00:01:36.210 CC module/accel/iaa/accel_iaa.o 00:01:36.210 CC module/accel/ioat/accel_ioat.o 00:01:36.210 CC module/keyring/linux/keyring.o 00:01:36.210 CC module/blob/bdev/blob_bdev.o 00:01:36.210 CC module/accel/iaa/accel_iaa_rpc.o 00:01:36.210 CC module/keyring/linux/keyring_rpc.o 00:01:36.210 LIB libspdk_env_dpdk_rpc.a 00:01:36.472 SO libspdk_env_dpdk_rpc.so.6.0 00:01:36.472 SYMLINK libspdk_env_dpdk_rpc.so 00:01:36.472 LIB libspdk_keyring_linux.a 00:01:36.472 LIB libspdk_keyring_file.a 00:01:36.472 LIB libspdk_scheduler_gscheduler.a 00:01:36.472 SO libspdk_keyring_linux.so.1.0 00:01:36.472 SO libspdk_scheduler_gscheduler.so.4.0 00:01:36.472 SO libspdk_keyring_file.so.1.0 00:01:36.472 LIB libspdk_accel_error.a 00:01:36.472 LIB libspdk_scheduler_dpdk_governor.a 00:01:36.472 LIB libspdk_accel_ioat.a 00:01:36.472 LIB libspdk_scheduler_dynamic.a 00:01:36.472 LIB libspdk_accel_iaa.a 00:01:36.472 SO libspdk_accel_error.so.2.0 00:01:36.472 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:36.472 SO 
libspdk_accel_ioat.so.6.0 00:01:36.472 SYMLINK libspdk_scheduler_gscheduler.so 00:01:36.472 SO libspdk_scheduler_dynamic.so.4.0 00:01:36.472 SYMLINK libspdk_keyring_linux.so 00:01:36.472 SYMLINK libspdk_keyring_file.so 00:01:36.472 SO libspdk_accel_iaa.so.3.0 00:01:36.472 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:36.472 LIB libspdk_accel_dsa.a 00:01:36.472 SYMLINK libspdk_accel_error.so 00:01:36.472 SYMLINK libspdk_accel_ioat.so 00:01:36.472 SYMLINK libspdk_scheduler_dynamic.so 00:01:36.472 LIB libspdk_blob_bdev.a 00:01:36.472 SYMLINK libspdk_accel_iaa.so 00:01:36.472 SO libspdk_accel_dsa.so.5.0 00:01:36.472 SO libspdk_blob_bdev.so.11.0 00:01:36.735 SYMLINK libspdk_blob_bdev.so 00:01:36.735 SYMLINK libspdk_accel_dsa.so 00:01:36.735 LIB libspdk_vfu_device.a 00:01:36.735 SO libspdk_vfu_device.so.3.0 00:01:36.993 CC module/bdev/gpt/gpt.o 00:01:36.993 CC module/bdev/null/bdev_null.o 00:01:36.993 CC module/bdev/delay/vbdev_delay.o 00:01:36.993 CC module/bdev/null/bdev_null_rpc.o 00:01:36.993 CC module/bdev/gpt/vbdev_gpt.o 00:01:36.993 CC module/bdev/lvol/vbdev_lvol.o 00:01:36.993 CC module/bdev/error/vbdev_error.o 00:01:36.993 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:36.993 CC module/bdev/error/vbdev_error_rpc.o 00:01:36.993 CC module/blobfs/bdev/blobfs_bdev.o 00:01:36.993 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:36.993 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:36.993 CC module/bdev/passthru/vbdev_passthru.o 00:01:36.993 CC module/bdev/raid/bdev_raid.o 00:01:36.993 CC module/bdev/malloc/bdev_malloc.o 00:01:36.993 CC module/bdev/raid/bdev_raid_rpc.o 00:01:36.993 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:36.993 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:36.993 CC module/bdev/raid/bdev_raid_sb.o 00:01:36.993 CC module/bdev/raid/raid0.o 00:01:36.993 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:36.993 CC module/bdev/ftl/bdev_ftl.o 00:01:36.993 CC module/bdev/raid/raid1.o 00:01:36.993 CC module/bdev/raid/concat.o 00:01:36.993 CC module/bdev/nvme/bdev_nvme.o 00:01:36.993 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:36.993 CC module/bdev/aio/bdev_aio.o 00:01:36.993 CC module/bdev/aio/bdev_aio_rpc.o 00:01:36.993 CC module/bdev/nvme/nvme_rpc.o 00:01:36.993 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:36.993 CC module/bdev/nvme/bdev_mdns_client.o 00:01:36.993 CC module/bdev/split/vbdev_split.o 00:01:36.993 CC module/bdev/nvme/vbdev_opal.o 00:01:36.993 CC module/bdev/split/vbdev_split_rpc.o 00:01:36.993 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:36.993 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:36.993 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:36.993 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:36.993 CC module/bdev/iscsi/bdev_iscsi.o 00:01:36.993 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:36.993 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:36.993 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:36.993 SYMLINK libspdk_vfu_device.so 00:01:37.251 LIB libspdk_sock_posix.a 00:01:37.251 SO libspdk_sock_posix.so.6.0 00:01:37.251 SYMLINK libspdk_sock_posix.so 00:01:37.251 LIB libspdk_blobfs_bdev.a 00:01:37.251 LIB libspdk_bdev_split.a 00:01:37.251 SO libspdk_blobfs_bdev.so.6.0 00:01:37.251 SO libspdk_bdev_split.so.6.0 00:01:37.251 LIB libspdk_bdev_passthru.a 00:01:37.251 SO libspdk_bdev_passthru.so.6.0 00:01:37.251 LIB libspdk_bdev_error.a 00:01:37.251 SYMLINK libspdk_blobfs_bdev.so 00:01:37.251 LIB libspdk_bdev_ftl.a 00:01:37.251 SYMLINK libspdk_bdev_split.so 00:01:37.251 LIB libspdk_bdev_null.a 00:01:37.251 SO libspdk_bdev_error.so.6.0 00:01:37.509 SO 
libspdk_bdev_ftl.so.6.0 00:01:37.509 SYMLINK libspdk_bdev_passthru.so 00:01:37.509 SO libspdk_bdev_null.so.6.0 00:01:37.509 LIB libspdk_bdev_gpt.a 00:01:37.509 SO libspdk_bdev_gpt.so.6.0 00:01:37.509 SYMLINK libspdk_bdev_error.so 00:01:37.509 SYMLINK libspdk_bdev_ftl.so 00:01:37.509 SYMLINK libspdk_bdev_null.so 00:01:37.509 LIB libspdk_bdev_aio.a 00:01:37.509 LIB libspdk_bdev_delay.a 00:01:37.509 LIB libspdk_bdev_zone_block.a 00:01:37.509 LIB libspdk_bdev_iscsi.a 00:01:37.509 SYMLINK libspdk_bdev_gpt.so 00:01:37.509 LIB libspdk_bdev_virtio.a 00:01:37.509 SO libspdk_bdev_delay.so.6.0 00:01:37.509 SO libspdk_bdev_aio.so.6.0 00:01:37.509 SO libspdk_bdev_zone_block.so.6.0 00:01:37.509 SO libspdk_bdev_iscsi.so.6.0 00:01:37.509 SO libspdk_bdev_virtio.so.6.0 00:01:37.509 LIB libspdk_bdev_malloc.a 00:01:37.509 LIB libspdk_bdev_lvol.a 00:01:37.509 SYMLINK libspdk_bdev_delay.so 00:01:37.509 SYMLINK libspdk_bdev_aio.so 00:01:37.509 SO libspdk_bdev_malloc.so.6.0 00:01:37.509 SO libspdk_bdev_lvol.so.6.0 00:01:37.509 SYMLINK libspdk_bdev_iscsi.so 00:01:37.509 SYMLINK libspdk_bdev_zone_block.so 00:01:37.509 SYMLINK libspdk_bdev_virtio.so 00:01:37.509 SYMLINK libspdk_bdev_malloc.so 00:01:37.509 SYMLINK libspdk_bdev_lvol.so 00:01:38.075 LIB libspdk_bdev_raid.a 00:01:38.075 SO libspdk_bdev_raid.so.6.0 00:01:38.075 SYMLINK libspdk_bdev_raid.so 00:01:39.449 LIB libspdk_bdev_nvme.a 00:01:39.449 SO libspdk_bdev_nvme.so.7.0 00:01:39.449 SYMLINK libspdk_bdev_nvme.so 00:01:39.707 CC module/event/subsystems/iobuf/iobuf.o 00:01:39.708 CC module/event/subsystems/sock/sock.o 00:01:39.708 CC module/event/subsystems/keyring/keyring.o 00:01:39.708 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:39.708 CC module/event/subsystems/vmd/vmd.o 00:01:39.708 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:39.708 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:39.708 CC module/event/subsystems/scheduler/scheduler.o 00:01:39.708 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:39.967 LIB libspdk_event_keyring.a 00:01:39.967 LIB libspdk_event_vhost_blk.a 00:01:39.967 LIB libspdk_event_vfu_tgt.a 00:01:39.967 LIB libspdk_event_scheduler.a 00:01:39.967 LIB libspdk_event_vmd.a 00:01:39.967 LIB libspdk_event_sock.a 00:01:39.967 LIB libspdk_event_iobuf.a 00:01:39.967 SO libspdk_event_keyring.so.1.0 00:01:39.967 SO libspdk_event_vhost_blk.so.3.0 00:01:39.967 SO libspdk_event_sock.so.5.0 00:01:39.967 SO libspdk_event_scheduler.so.4.0 00:01:39.967 SO libspdk_event_vfu_tgt.so.3.0 00:01:39.967 SO libspdk_event_vmd.so.6.0 00:01:39.967 SO libspdk_event_iobuf.so.3.0 00:01:39.967 SYMLINK libspdk_event_keyring.so 00:01:39.967 SYMLINK libspdk_event_vhost_blk.so 00:01:39.967 SYMLINK libspdk_event_sock.so 00:01:39.967 SYMLINK libspdk_event_vfu_tgt.so 00:01:39.967 SYMLINK libspdk_event_scheduler.so 00:01:39.967 SYMLINK libspdk_event_vmd.so 00:01:39.967 SYMLINK libspdk_event_iobuf.so 00:01:40.225 CC module/event/subsystems/accel/accel.o 00:01:40.484 LIB libspdk_event_accel.a 00:01:40.484 SO libspdk_event_accel.so.6.0 00:01:40.484 SYMLINK libspdk_event_accel.so 00:01:40.743 CC module/event/subsystems/bdev/bdev.o 00:01:40.743 LIB libspdk_event_bdev.a 00:01:40.743 SO libspdk_event_bdev.so.6.0 00:01:41.001 SYMLINK libspdk_event_bdev.so 00:01:41.001 CC module/event/subsystems/nbd/nbd.o 00:01:41.001 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:41.001 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:41.001 CC module/event/subsystems/ublk/ublk.o 00:01:41.001 CC module/event/subsystems/scsi/scsi.o 00:01:41.260 LIB libspdk_event_ublk.a 
00:01:41.260 LIB libspdk_event_nbd.a 00:01:41.260 SO libspdk_event_nbd.so.6.0 00:01:41.260 LIB libspdk_event_scsi.a 00:01:41.260 SO libspdk_event_ublk.so.3.0 00:01:41.260 SO libspdk_event_scsi.so.6.0 00:01:41.260 SYMLINK libspdk_event_nbd.so 00:01:41.260 SYMLINK libspdk_event_ublk.so 00:01:41.260 SYMLINK libspdk_event_scsi.so 00:01:41.260 LIB libspdk_event_nvmf.a 00:01:41.260 SO libspdk_event_nvmf.so.6.0 00:01:41.260 SYMLINK libspdk_event_nvmf.so 00:01:41.517 CC module/event/subsystems/iscsi/iscsi.o 00:01:41.517 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:41.517 LIB libspdk_event_vhost_scsi.a 00:01:41.517 LIB libspdk_event_iscsi.a 00:01:41.517 SO libspdk_event_vhost_scsi.so.3.0 00:01:41.517 SO libspdk_event_iscsi.so.6.0 00:01:41.775 SYMLINK libspdk_event_vhost_scsi.so 00:01:41.775 SYMLINK libspdk_event_iscsi.so 00:01:41.775 SO libspdk.so.6.0 00:01:41.775 SYMLINK libspdk.so 00:01:42.039 CC test/rpc_client/rpc_client_test.o 00:01:42.039 CC app/trace_record/trace_record.o 00:01:42.039 CC app/spdk_lspci/spdk_lspci.o 00:01:42.039 CC app/spdk_nvme_perf/perf.o 00:01:42.039 CC app/spdk_top/spdk_top.o 00:01:42.039 CXX app/trace/trace.o 00:01:42.039 TEST_HEADER include/spdk/accel.h 00:01:42.039 TEST_HEADER include/spdk/accel_module.h 00:01:42.039 CC app/spdk_nvme_identify/identify.o 00:01:42.039 TEST_HEADER include/spdk/assert.h 00:01:42.039 TEST_HEADER include/spdk/barrier.h 00:01:42.039 TEST_HEADER include/spdk/base64.h 00:01:42.039 TEST_HEADER include/spdk/bdev.h 00:01:42.039 CC app/spdk_nvme_discover/discovery_aer.o 00:01:42.039 TEST_HEADER include/spdk/bdev_module.h 00:01:42.039 TEST_HEADER include/spdk/bdev_zone.h 00:01:42.039 TEST_HEADER include/spdk/bit_array.h 00:01:42.039 TEST_HEADER include/spdk/bit_pool.h 00:01:42.039 TEST_HEADER include/spdk/blob_bdev.h 00:01:42.039 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:42.039 TEST_HEADER include/spdk/blobfs.h 00:01:42.039 TEST_HEADER include/spdk/blob.h 00:01:42.039 TEST_HEADER include/spdk/conf.h 00:01:42.039 TEST_HEADER include/spdk/cpuset.h 00:01:42.039 TEST_HEADER include/spdk/config.h 00:01:42.039 TEST_HEADER include/spdk/crc16.h 00:01:42.039 TEST_HEADER include/spdk/crc64.h 00:01:42.039 TEST_HEADER include/spdk/crc32.h 00:01:42.039 TEST_HEADER include/spdk/dif.h 00:01:42.039 TEST_HEADER include/spdk/dma.h 00:01:42.039 TEST_HEADER include/spdk/endian.h 00:01:42.039 TEST_HEADER include/spdk/env_dpdk.h 00:01:42.039 TEST_HEADER include/spdk/env.h 00:01:42.039 TEST_HEADER include/spdk/event.h 00:01:42.039 TEST_HEADER include/spdk/fd_group.h 00:01:42.039 TEST_HEADER include/spdk/fd.h 00:01:42.039 TEST_HEADER include/spdk/file.h 00:01:42.039 TEST_HEADER include/spdk/ftl.h 00:01:42.039 TEST_HEADER include/spdk/gpt_spec.h 00:01:42.039 TEST_HEADER include/spdk/hexlify.h 00:01:42.039 TEST_HEADER include/spdk/histogram_data.h 00:01:42.039 TEST_HEADER include/spdk/idxd.h 00:01:42.039 TEST_HEADER include/spdk/idxd_spec.h 00:01:42.039 TEST_HEADER include/spdk/init.h 00:01:42.039 TEST_HEADER include/spdk/ioat.h 00:01:42.039 TEST_HEADER include/spdk/ioat_spec.h 00:01:42.039 TEST_HEADER include/spdk/iscsi_spec.h 00:01:42.039 TEST_HEADER include/spdk/json.h 00:01:42.039 TEST_HEADER include/spdk/jsonrpc.h 00:01:42.039 TEST_HEADER include/spdk/keyring.h 00:01:42.039 TEST_HEADER include/spdk/keyring_module.h 00:01:42.039 TEST_HEADER include/spdk/log.h 00:01:42.039 TEST_HEADER include/spdk/likely.h 00:01:42.039 TEST_HEADER include/spdk/lvol.h 00:01:42.039 TEST_HEADER include/spdk/memory.h 00:01:42.039 TEST_HEADER include/spdk/mmio.h 00:01:42.039 
TEST_HEADER include/spdk/nbd.h 00:01:42.039 TEST_HEADER include/spdk/net.h 00:01:42.039 TEST_HEADER include/spdk/notify.h 00:01:42.040 TEST_HEADER include/spdk/nvme.h 00:01:42.040 TEST_HEADER include/spdk/nvme_intel.h 00:01:42.040 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:42.040 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:42.040 TEST_HEADER include/spdk/nvme_spec.h 00:01:42.040 TEST_HEADER include/spdk/nvme_zns.h 00:01:42.040 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:42.040 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:42.040 TEST_HEADER include/spdk/nvmf.h 00:01:42.040 TEST_HEADER include/spdk/nvmf_spec.h 00:01:42.040 TEST_HEADER include/spdk/nvmf_transport.h 00:01:42.040 TEST_HEADER include/spdk/opal_spec.h 00:01:42.040 TEST_HEADER include/spdk/opal.h 00:01:42.040 TEST_HEADER include/spdk/pci_ids.h 00:01:42.040 TEST_HEADER include/spdk/pipe.h 00:01:42.040 TEST_HEADER include/spdk/queue.h 00:01:42.040 TEST_HEADER include/spdk/reduce.h 00:01:42.040 TEST_HEADER include/spdk/rpc.h 00:01:42.040 TEST_HEADER include/spdk/scheduler.h 00:01:42.040 TEST_HEADER include/spdk/scsi.h 00:01:42.040 TEST_HEADER include/spdk/scsi_spec.h 00:01:42.040 TEST_HEADER include/spdk/stdinc.h 00:01:42.040 TEST_HEADER include/spdk/sock.h 00:01:42.040 TEST_HEADER include/spdk/string.h 00:01:42.040 TEST_HEADER include/spdk/thread.h 00:01:42.040 TEST_HEADER include/spdk/trace.h 00:01:42.040 TEST_HEADER include/spdk/trace_parser.h 00:01:42.040 TEST_HEADER include/spdk/tree.h 00:01:42.040 TEST_HEADER include/spdk/ublk.h 00:01:42.040 TEST_HEADER include/spdk/util.h 00:01:42.040 CC app/spdk_dd/spdk_dd.o 00:01:42.040 TEST_HEADER include/spdk/uuid.h 00:01:42.040 TEST_HEADER include/spdk/version.h 00:01:42.040 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:42.040 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:42.040 TEST_HEADER include/spdk/vhost.h 00:01:42.040 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:42.040 TEST_HEADER include/spdk/vmd.h 00:01:42.040 TEST_HEADER include/spdk/xor.h 00:01:42.040 TEST_HEADER include/spdk/zipf.h 00:01:42.040 CXX test/cpp_headers/accel.o 00:01:42.040 CXX test/cpp_headers/accel_module.o 00:01:42.040 CXX test/cpp_headers/assert.o 00:01:42.040 CXX test/cpp_headers/barrier.o 00:01:42.040 CXX test/cpp_headers/base64.o 00:01:42.040 CXX test/cpp_headers/bdev.o 00:01:42.040 CXX test/cpp_headers/bdev_module.o 00:01:42.040 CXX test/cpp_headers/bdev_zone.o 00:01:42.040 CXX test/cpp_headers/bit_array.o 00:01:42.040 CXX test/cpp_headers/bit_pool.o 00:01:42.040 CXX test/cpp_headers/blob_bdev.o 00:01:42.040 CXX test/cpp_headers/blobfs_bdev.o 00:01:42.040 CXX test/cpp_headers/blobfs.o 00:01:42.040 CXX test/cpp_headers/blob.o 00:01:42.040 CXX test/cpp_headers/conf.o 00:01:42.040 CXX test/cpp_headers/config.o 00:01:42.040 CXX test/cpp_headers/cpuset.o 00:01:42.040 CXX test/cpp_headers/crc16.o 00:01:42.040 CC app/nvmf_tgt/nvmf_main.o 00:01:42.040 CC app/iscsi_tgt/iscsi_tgt.o 00:01:42.040 CC examples/ioat/verify/verify.o 00:01:42.040 CXX test/cpp_headers/crc32.o 00:01:42.040 CC test/thread/poller_perf/poller_perf.o 00:01:42.040 CC test/app/stub/stub.o 00:01:42.040 CC examples/util/zipf/zipf.o 00:01:42.040 CC test/env/pci/pci_ut.o 00:01:42.040 CC test/env/vtophys/vtophys.o 00:01:42.040 CC examples/ioat/perf/perf.o 00:01:42.040 CC test/app/histogram_perf/histogram_perf.o 00:01:42.040 CC test/app/jsoncat/jsoncat.o 00:01:42.040 CC app/spdk_tgt/spdk_tgt.o 00:01:42.040 CC test/env/memory/memory_ut.o 00:01:42.040 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:42.040 CC app/fio/nvme/fio_plugin.o 
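
The app/fio/nvme/fio_plugin.o object just compiled is SPDK's external ioengine for fio. After the build it is used by preloading the resulting plugin into a stock fio binary; the invocation below is a sketch based on SPDK's fio documentation rather than on this log, so the plugin path and filename syntax are illustrative:

    # Sketch: drive a local NVMe device through the SPDK fio plugin.
    # The plugin requires thread=1; the PCI address here is a placeholder.
    LD_PRELOAD=build/fio/spdk_nvme fio --name=probe --ioengine=spdk \
        --filename='trtype=PCIe traddr=0000.01.00.0 ns=1' \
        --thread=1 --rw=randread --bs=4k --time_based=1 --runtime=10
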
00:01:42.040 CC test/app/bdev_svc/bdev_svc.o 00:01:42.301 CC test/dma/test_dma/test_dma.o 00:01:42.301 CC app/fio/bdev/fio_plugin.o 00:01:42.301 CC test/env/mem_callbacks/mem_callbacks.o 00:01:42.301 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:42.301 LINK spdk_lspci 00:01:42.301 LINK rpc_client_test 00:01:42.301 LINK spdk_nvme_discover 00:01:42.301 LINK poller_perf 00:01:42.301 LINK interrupt_tgt 00:01:42.565 LINK vtophys 00:01:42.565 LINK jsoncat 00:01:42.565 LINK histogram_perf 00:01:42.565 LINK zipf 00:01:42.565 CXX test/cpp_headers/crc64.o 00:01:42.565 LINK nvmf_tgt 00:01:42.565 CXX test/cpp_headers/dif.o 00:01:42.565 CXX test/cpp_headers/dma.o 00:01:42.565 CXX test/cpp_headers/endian.o 00:01:42.565 CXX test/cpp_headers/env_dpdk.o 00:01:42.565 CXX test/cpp_headers/env.o 00:01:42.565 CXX test/cpp_headers/event.o 00:01:42.565 CXX test/cpp_headers/fd_group.o 00:01:42.565 LINK env_dpdk_post_init 00:01:42.565 CXX test/cpp_headers/fd.o 00:01:42.565 LINK stub 00:01:42.565 CXX test/cpp_headers/file.o 00:01:42.565 CXX test/cpp_headers/ftl.o 00:01:42.565 LINK iscsi_tgt 00:01:42.565 LINK spdk_trace_record 00:01:42.565 CXX test/cpp_headers/gpt_spec.o 00:01:42.565 CXX test/cpp_headers/hexlify.o 00:01:42.565 LINK verify 00:01:42.565 CXX test/cpp_headers/histogram_data.o 00:01:42.565 LINK ioat_perf 00:01:42.565 CXX test/cpp_headers/idxd.o 00:01:42.565 CXX test/cpp_headers/idxd_spec.o 00:01:42.565 LINK bdev_svc 00:01:42.565 LINK spdk_tgt 00:01:42.565 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:42.565 CXX test/cpp_headers/init.o 00:01:42.565 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:42.565 CXX test/cpp_headers/ioat.o 00:01:42.565 CXX test/cpp_headers/ioat_spec.o 00:01:42.829 CXX test/cpp_headers/iscsi_spec.o 00:01:42.829 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:42.829 CXX test/cpp_headers/json.o 00:01:42.829 CXX test/cpp_headers/jsonrpc.o 00:01:42.829 LINK spdk_dd 00:01:42.829 CXX test/cpp_headers/keyring.o 00:01:42.829 CXX test/cpp_headers/keyring_module.o 00:01:42.829 LINK spdk_trace 00:01:42.829 CXX test/cpp_headers/likely.o 00:01:42.829 CXX test/cpp_headers/log.o 00:01:42.829 CXX test/cpp_headers/lvol.o 00:01:42.829 CXX test/cpp_headers/memory.o 00:01:42.829 LINK pci_ut 00:01:42.829 CXX test/cpp_headers/mmio.o 00:01:42.829 CXX test/cpp_headers/nbd.o 00:01:42.829 CXX test/cpp_headers/net.o 00:01:42.829 CXX test/cpp_headers/notify.o 00:01:42.829 CXX test/cpp_headers/nvme.o 00:01:42.829 CXX test/cpp_headers/nvme_intel.o 00:01:42.829 CXX test/cpp_headers/nvme_ocssd.o 00:01:42.829 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:42.829 CXX test/cpp_headers/nvme_spec.o 00:01:42.829 CXX test/cpp_headers/nvme_zns.o 00:01:43.093 CXX test/cpp_headers/nvmf_cmd.o 00:01:43.093 LINK test_dma 00:01:43.093 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:43.093 CXX test/cpp_headers/nvmf.o 00:01:43.093 CXX test/cpp_headers/nvmf_spec.o 00:01:43.093 CXX test/cpp_headers/nvmf_transport.o 00:01:43.093 CXX test/cpp_headers/opal.o 00:01:43.093 CXX test/cpp_headers/opal_spec.o 00:01:43.093 LINK nvme_fuzz 00:01:43.093 CC test/event/reactor/reactor.o 00:01:43.093 CC test/event/reactor_perf/reactor_perf.o 00:01:43.093 CC test/event/event_perf/event_perf.o 00:01:43.093 CXX test/cpp_headers/pci_ids.o 00:01:43.093 CXX test/cpp_headers/pipe.o 00:01:43.093 CC test/event/app_repeat/app_repeat.o 00:01:43.093 CXX test/cpp_headers/queue.o 00:01:43.093 CC examples/sock/hello_world/hello_sock.o 00:01:43.093 CC examples/vmd/lsvmd/lsvmd.o 00:01:43.093 CC examples/vmd/led/led.o 00:01:43.093 CC examples/idxd/perf/perf.o 
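
The CXX test/cpp_headers/*.o compiles mixed into this stretch are SPDK's public-header self-containment check: each installed spdk/*.h from the TEST_HEADER list earlier is compiled as its own C++ translation unit, so a header missing an include or an extern "C" guard fails the build here rather than in a consumer project. Each generated unit amounts to roughly this (a sketch; the harness generates the real files and compiler flags):

    # Sketch: what one header-check translation unit boils down to.
    printf '#include "spdk/accel.h"\n' > accel_check.cpp
    g++ -I include -c accel_check.cpp -o accel_check.o   # success == header is C++-clean
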
00:01:43.093 LINK spdk_bdev 00:01:43.093 CXX test/cpp_headers/reduce.o 00:01:43.093 LINK spdk_nvme 00:01:43.352 CC test/event/scheduler/scheduler.o 00:01:43.352 CXX test/cpp_headers/rpc.o 00:01:43.352 CXX test/cpp_headers/scheduler.o 00:01:43.352 CXX test/cpp_headers/scsi.o 00:01:43.352 CC examples/thread/thread/thread_ex.o 00:01:43.352 CXX test/cpp_headers/scsi_spec.o 00:01:43.352 CXX test/cpp_headers/sock.o 00:01:43.352 CXX test/cpp_headers/stdinc.o 00:01:43.352 CXX test/cpp_headers/string.o 00:01:43.352 CXX test/cpp_headers/thread.o 00:01:43.352 CXX test/cpp_headers/trace.o 00:01:43.352 CXX test/cpp_headers/trace_parser.o 00:01:43.352 CXX test/cpp_headers/tree.o 00:01:43.352 CXX test/cpp_headers/ublk.o 00:01:43.352 CXX test/cpp_headers/util.o 00:01:43.352 CXX test/cpp_headers/uuid.o 00:01:43.352 CXX test/cpp_headers/version.o 00:01:43.352 LINK reactor 00:01:43.352 CXX test/cpp_headers/vfio_user_pci.o 00:01:43.352 LINK reactor_perf 00:01:43.352 CXX test/cpp_headers/vfio_user_spec.o 00:01:43.352 LINK event_perf 00:01:43.352 CXX test/cpp_headers/vhost.o 00:01:43.352 CXX test/cpp_headers/vmd.o 00:01:43.352 CXX test/cpp_headers/xor.o 00:01:43.352 CXX test/cpp_headers/zipf.o 00:01:43.352 LINK mem_callbacks 00:01:43.352 CC app/vhost/vhost.o 00:01:43.352 LINK lsvmd 00:01:43.352 LINK app_repeat 00:01:43.612 LINK led 00:01:43.612 LINK spdk_nvme_perf 00:01:43.612 LINK vhost_fuzz 00:01:43.612 LINK spdk_nvme_identify 00:01:43.612 LINK spdk_top 00:01:43.612 LINK hello_sock 00:01:43.612 LINK scheduler 00:01:43.612 CC test/nvme/aer/aer.o 00:01:43.612 CC test/nvme/startup/startup.o 00:01:43.612 CC test/nvme/err_injection/err_injection.o 00:01:43.612 CC test/nvme/overhead/overhead.o 00:01:43.612 CC test/nvme/sgl/sgl.o 00:01:43.612 CC test/nvme/reset/reset.o 00:01:43.612 CC test/nvme/e2edp/nvme_dp.o 00:01:43.612 CC test/nvme/reserve/reserve.o 00:01:43.612 CC test/nvme/simple_copy/simple_copy.o 00:01:43.870 CC test/nvme/connect_stress/connect_stress.o 00:01:43.870 CC test/blobfs/mkfs/mkfs.o 00:01:43.870 CC test/accel/dif/dif.o 00:01:43.870 LINK thread 00:01:43.870 CC test/nvme/fused_ordering/fused_ordering.o 00:01:43.870 CC test/nvme/compliance/nvme_compliance.o 00:01:43.870 CC test/nvme/boot_partition/boot_partition.o 00:01:43.870 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:43.870 CC test/nvme/cuse/cuse.o 00:01:43.870 CC test/nvme/fdp/fdp.o 00:01:43.870 CC test/lvol/esnap/esnap.o 00:01:43.870 LINK vhost 00:01:43.870 LINK idxd_perf 00:01:43.870 LINK startup 00:01:44.129 LINK reserve 00:01:44.129 LINK doorbell_aers 00:01:44.129 LINK fused_ordering 00:01:44.129 LINK connect_stress 00:01:44.129 LINK simple_copy 00:01:44.129 CC examples/nvme/hello_world/hello_world.o 00:01:44.129 LINK reset 00:01:44.129 CC examples/nvme/reconnect/reconnect.o 00:01:44.129 CC examples/nvme/hotplug/hotplug.o 00:01:44.129 LINK aer 00:01:44.129 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:44.129 LINK err_injection 00:01:44.129 CC examples/nvme/arbitration/arbitration.o 00:01:44.129 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:44.129 CC examples/nvme/abort/abort.o 00:01:44.129 LINK boot_partition 00:01:44.129 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:44.129 LINK sgl 00:01:44.129 LINK nvme_dp 00:01:44.129 LINK mkfs 00:01:44.129 LINK overhead 00:01:44.129 LINK fdp 00:01:44.129 LINK nvme_compliance 00:01:44.387 LINK memory_ut 00:01:44.387 LINK dif 00:01:44.387 CC examples/accel/perf/accel_perf.o 00:01:44.387 LINK cmb_copy 00:01:44.387 CC examples/blob/cli/blobcli.o 00:01:44.387 CC 
examples/blob/hello_world/hello_blob.o 00:01:44.387 LINK pmr_persistence 00:01:44.387 LINK hotplug 00:01:44.387 LINK hello_world 00:01:44.387 LINK arbitration 00:01:44.645 LINK reconnect 00:01:44.645 LINK abort 00:01:44.645 LINK hello_blob 00:01:44.645 LINK nvme_manage 00:01:44.645 CC test/bdev/bdevio/bdevio.o 00:01:44.902 LINK accel_perf 00:01:44.902 LINK blobcli 00:01:44.902 LINK iscsi_fuzz 00:01:45.161 LINK bdevio 00:01:45.161 CC examples/bdev/hello_world/hello_bdev.o 00:01:45.161 CC examples/bdev/bdevperf/bdevperf.o 00:01:45.418 LINK cuse 00:01:45.418 LINK hello_bdev 00:01:45.982 LINK bdevperf 00:01:46.239 CC examples/nvmf/nvmf/nvmf.o 00:01:46.497 LINK nvmf 00:01:49.037 LINK esnap 00:01:49.037 00:01:49.037 real 0m48.695s 00:01:49.037 user 10m6.760s 00:01:49.037 sys 2m26.178s 00:01:49.037 13:30:45 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:49.037 13:30:45 make -- common/autotest_common.sh@10 -- $ set +x 00:01:49.037 ************************************ 00:01:49.037 END TEST make 00:01:49.037 ************************************ 00:01:49.037 13:30:45 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:49.037 13:30:45 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:49.037 13:30:45 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:49.037 13:30:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.037 13:30:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:49.037 13:30:45 -- pm/common@44 -- $ pid=355539 00:01:49.037 13:30:45 -- pm/common@50 -- $ kill -TERM 355539 00:01:49.037 13:30:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.037 13:30:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:49.037 13:30:45 -- pm/common@44 -- $ pid=355541 00:01:49.037 13:30:45 -- pm/common@50 -- $ kill -TERM 355541 00:01:49.037 13:30:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.037 13:30:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:49.037 13:30:45 -- pm/common@44 -- $ pid=355543 00:01:49.037 13:30:45 -- pm/common@50 -- $ kill -TERM 355543 00:01:49.037 13:30:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.037 13:30:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:49.037 13:30:45 -- pm/common@44 -- $ pid=355571 00:01:49.037 13:30:45 -- pm/common@50 -- $ sudo -E kill -TERM 355571 00:01:49.037 13:30:46 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:49.037 13:30:46 -- nvmf/common.sh@7 -- # uname -s 00:01:49.037 13:30:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:49.037 13:30:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:49.037 13:30:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:49.037 13:30:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:49.037 13:30:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:49.037 13:30:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:49.037 13:30:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:49.037 13:30:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:49.037 13:30:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:49.037 13:30:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:49.037 
13:30:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:01:49.037 13:30:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:01:49.037 13:30:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:49.037 13:30:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:49.037 13:30:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:49.037 13:30:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:49.037 13:30:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:49.037 13:30:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:49.037 13:30:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:49.037 13:30:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:49.037 13:30:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.037 13:30:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.037 13:30:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.037 13:30:46 -- paths/export.sh@5 -- # export PATH 00:01:49.037 13:30:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.037 13:30:46 -- nvmf/common.sh@47 -- # : 0 00:01:49.037 13:30:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:49.037 13:30:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:49.037 13:30:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:49.037 13:30:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:49.037 13:30:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:49.037 13:30:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:49.037 13:30:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:49.037 13:30:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:49.037 13:30:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:49.037 13:30:46 -- spdk/autotest.sh@32 -- # uname -s 00:01:49.037 13:30:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:49.037 13:30:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:49.037 13:30:46 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:49.037 13:30:46 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:49.037 13:30:46 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:49.037 13:30:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:49.037 13:30:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:49.037 13:30:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:49.037 13:30:46 -- spdk/autotest.sh@48 -- # udevadm_pid=411525 00:01:49.037 13:30:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:49.037 13:30:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:49.037 13:30:46 -- pm/common@17 -- # local monitor 00:01:49.037 13:30:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.037 13:30:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.037 13:30:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.037 13:30:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.037 13:30:46 -- pm/common@21 -- # date +%s 00:01:49.037 13:30:46 -- pm/common@21 -- # date +%s 00:01:49.037 13:30:46 -- pm/common@25 -- # sleep 1 00:01:49.037 13:30:46 -- pm/common@21 -- # date +%s 00:01:49.037 13:30:46 -- pm/common@21 -- # date +%s 00:01:49.037 13:30:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721907046 00:01:49.037 13:30:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721907046 00:01:49.037 13:30:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721907046 00:01:49.037 13:30:46 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721907046 00:01:49.296 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721907046_collect-vmstat.pm.log 00:01:49.296 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721907046_collect-cpu-temp.pm.log 00:01:49.296 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721907046_collect-cpu-load.pm.log 00:01:49.296 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721907046_collect-bmc-pm.bmc.pm.log 00:01:50.231 13:30:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:50.231 13:30:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:50.231 13:30:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:01:50.231 13:30:47 -- common/autotest_common.sh@10 -- # set +x 00:01:50.231 13:30:47 -- spdk/autotest.sh@59 -- # create_test_list 00:01:50.231 13:30:47 -- common/autotest_common.sh@748 -- # xtrace_disable 00:01:50.231 13:30:47 -- common/autotest_common.sh@10 -- # set +x 00:01:50.231 13:30:47 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:50.231 13:30:47 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:50.231 13:30:47 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
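All four pm collectors are launched with the same epoch suffix from date +%s, so their logs line up for later correlation, and each records a pid file that stop_monitor_resources TERMs at the end of the run (the kill -TERM calls at the top of this log). A minimal sketch of that pattern, with a hypothetical sampler standing in for the real collect-vmstat script:

outdir=/tmp/power      # hypothetical; this run writes to .../output/power
suffix=$(date +%s)
mkdir -p "$outdir"
# sample once a second into an epoch-stamped log, in the background
( while :; do vmstat 1 1 | tail -1; sleep 1; done ) >> "$outdir/monitor.autotest.sh.${suffix}_collect-vmstat.pm.log" &
echo $! > "$outdir/collect-vmstat.pid"   # lets a later kill -TERM stop it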
00:01:50.231 13:30:47 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:50.231 13:30:47 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:50.231 13:30:47 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:50.231 13:30:47 -- common/autotest_common.sh@1455 -- # uname 00:01:50.231 13:30:47 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:01:50.231 13:30:47 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:50.231 13:30:47 -- common/autotest_common.sh@1475 -- # uname 00:01:50.231 13:30:47 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:01:50.231 13:30:47 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:50.231 13:30:47 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:50.231 13:30:47 -- spdk/autotest.sh@72 -- # hash lcov 00:01:50.231 13:30:47 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:50.231 13:30:47 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:50.231 --rc lcov_branch_coverage=1 00:01:50.231 --rc lcov_function_coverage=1 00:01:50.231 --rc genhtml_branch_coverage=1 00:01:50.231 --rc genhtml_function_coverage=1 00:01:50.231 --rc genhtml_legend=1 00:01:50.231 --rc geninfo_all_blocks=1 00:01:50.231 ' 00:01:50.231 13:30:47 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:50.231 --rc lcov_branch_coverage=1 00:01:50.231 --rc lcov_function_coverage=1 00:01:50.231 --rc genhtml_branch_coverage=1 00:01:50.231 --rc genhtml_function_coverage=1 00:01:50.231 --rc genhtml_legend=1 00:01:50.231 --rc geninfo_all_blocks=1 00:01:50.231 ' 00:01:50.231 13:30:47 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:50.231 --rc lcov_branch_coverage=1 00:01:50.231 --rc lcov_function_coverage=1 00:01:50.231 --rc genhtml_branch_coverage=1 00:01:50.231 --rc genhtml_function_coverage=1 00:01:50.231 --rc genhtml_legend=1 00:01:50.231 --rc geninfo_all_blocks=1 00:01:50.231 --no-external' 00:01:50.231 13:30:47 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:50.231 --rc lcov_branch_coverage=1 00:01:50.231 --rc lcov_function_coverage=1 00:01:50.231 --rc genhtml_branch_coverage=1 00:01:50.231 --rc genhtml_function_coverage=1 00:01:50.231 --rc genhtml_legend=1 00:01:50.231 --rc geninfo_all_blocks=1 00:01:50.231 --no-external' 00:01:50.231 13:30:47 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:50.231 lcov: LCOV version 1.14 00:01:50.231 13:30:47 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:05.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:05.118 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:20.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:20.001 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:20.001 
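The -c -i -t Baseline capture above records zero execution counts for every instrumented object before any test runs; the 'no functions found' warnings that follow are expected for header-only objects. A hedged sketch of the full cycle this baseline enables (assumed tracefile names; LCOV_OPTS omitted for brevity; flags as in lcov 1.14):

lcov -q -c -i -t Baseline -d . --no-external -o cov_base.info   # zero counts, pre-test
# ... run the test suites ...
lcov -q -c -t Autotest -d . --no-external -o cov_test.info      # real counts, post-test
lcov -a cov_base.info -a cov_test.info -o cov_total.info        # merge baseline + results
genhtml cov_total.info -o coverage                              # render the HTML report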
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:20.001 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno (the same two-record 'no functions found' warning repeats for every remaining header-only object under test/cpp_headers, from assert.gcno through version.gcno) 00:02:20.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:20.003 geninfo: WARNING: GCOV did not
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:20.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:20.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:20.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:20.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:20.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:20.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:20.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:20.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:20.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:20.003 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:23.289 13:31:19 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:23.289 13:31:19 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:23.289 13:31:19 -- common/autotest_common.sh@10 -- # set +x 00:02:23.289 13:31:19 -- spdk/autotest.sh@91 -- # rm -f 00:02:23.289 13:31:19 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:24.223 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:24.223 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:24.223 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:24.223 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:24.223 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:24.223 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:24.223 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:24.223 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:24.223 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:24.223 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:24.223 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:24.223 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:24.223 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:24.223 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:24.481 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:24.481 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:24.481 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:24.481 13:31:21 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:24.481 13:31:21 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:24.481 13:31:21 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:24.481 13:31:21 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:24.481 13:31:21 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:24.481 13:31:21 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:24.481 13:31:21 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 
00:02:24.481 13:31:21 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:24.481 13:31:21 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:24.481 13:31:21 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:24.481 13:31:21 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:24.481 13:31:21 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:24.481 13:31:21 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:24.481 13:31:21 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:24.481 13:31:21 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:24.481 No valid GPT data, bailing 00:02:24.481 13:31:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:24.481 13:31:21 -- scripts/common.sh@391 -- # pt= 00:02:24.481 13:31:21 -- scripts/common.sh@392 -- # return 1 00:02:24.481 13:31:21 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:24.481 1+0 records in 00:02:24.481 1+0 records out 00:02:24.481 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00229965 s, 456 MB/s 00:02:24.481 13:31:21 -- spdk/autotest.sh@118 -- # sync 00:02:24.481 13:31:21 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:24.481 13:31:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:24.481 13:31:21 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:27.018 13:31:23 -- spdk/autotest.sh@124 -- # uname -s 00:02:27.018 13:31:23 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:27.018 13:31:23 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:27.018 13:31:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:27.018 13:31:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:27.018 13:31:23 -- common/autotest_common.sh@10 -- # set +x 00:02:27.018 ************************************ 00:02:27.018 START TEST setup.sh 00:02:27.018 ************************************ 00:02:27.018 13:31:23 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:27.018 * Looking for test storage... 00:02:27.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:27.018 13:31:23 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:27.018 13:31:23 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:27.018 13:31:23 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:27.018 13:31:23 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:27.018 13:31:23 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:27.018 13:31:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:27.018 ************************************ 00:02:27.018 START TEST acl 00:02:27.018 ************************************ 00:02:27.018 13:31:23 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:27.018 * Looking for test storage... 
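Before wiping, block_in_use above asks blkid for a partition-table type and spdk-gpt.py for GPT data; only when both come back empty does autotest zero the first MiB so stale metadata cannot leak into later tests. A stand-alone sketch of that guard (destructive; hypothetical device path, so only point it at a disposable test disk):

dev=/dev/nvme0n1                          # hypothetical test disk
pt=$(blkid -s PTTYPE -o value "$dev" || true)
if [ -n "$pt" ]; then
    echo "$dev carries a $pt partition table; leaving it alone" >&2
else
    # nothing recognizable on the disk: scrub the first MiB, as above
    dd if=/dev/zero of="$dev" bs=1M count=1
fi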
00:02:27.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:27.018 13:31:23 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:27.018 13:31:23 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:27.018 13:31:23 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:27.018 13:31:23 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:27.018 13:31:23 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:27.018 13:31:23 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:27.018 13:31:23 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:27.018 13:31:23 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:27.018 13:31:23 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:27.018 13:31:23 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:27.018 13:31:23 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:27.018 13:31:23 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:27.018 13:31:23 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:27.018 13:31:23 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:27.018 13:31:23 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:27.018 13:31:23 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:28.395 13:31:25 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:28.395 13:31:25 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:28.395 13:31:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:28.395 13:31:25 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:28.395 13:31:25 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:28.395 13:31:25 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:29.328 Hugepages 00:02:29.328 node hugesize free / total 00:02:29.328 13:31:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:29.328 13:31:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:29.328 13:31:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.328 13:31:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:29.328 13:31:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:29.328 13:31:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.328 13:31:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:29.328 13:31:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:29.328 13:31:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.328 00:02:29.328 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:29.328 13:31:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:29.328 13:31:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:29.328 13:31:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.328 13:31:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:29.328 13:31:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:29.328 13:31:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:29.328 13:31:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.328 13:31:26 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 13:31:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 13:31:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 13:31:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ (the same four-record trace repeats for each remaining ioatdma channel: 0000:00:04.2 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.3) 00:02:29.329 13:31:26
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:29.329 13:31:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:29.329 13:31:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:29.329 13:31:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.329 13:31:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:29.329 13:31:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:29.329 13:31:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:29.329 13:31:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.329 13:31:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:29.329 13:31:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:29.329 13:31:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:29.329 13:31:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.329 13:31:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:29.329 13:31:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:29.329 13:31:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:29.329 13:31:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.587 13:31:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:02:29.587 13:31:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:29.587 13:31:26 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:29.587 13:31:26 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:29.587 13:31:26 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:29.587 13:31:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.587 13:31:26 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:29.587 13:31:26 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:29.587 13:31:26 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:29.587 13:31:26 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:29.587 13:31:26 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:29.587 ************************************ 00:02:29.587 START TEST denied 00:02:29.587 ************************************ 00:02:29.587 13:31:26 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:02:29.587 13:31:26 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:02:29.587 13:31:26 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:29.587 13:31:26 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:02:29.587 13:31:26 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:29.587 13:31:26 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:30.961 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:02:30.961 13:31:27 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:02:30.961 13:31:27 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:30.961 13:31:27 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:30.961 13:31:27 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:02:30.961 13:31:27 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:02:30.961 13:31:27 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:30.961 13:31:27 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:30.961 13:31:27 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:30.961 13:31:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:30.961 13:31:27 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:33.513 00:02:33.513 real 0m4.083s 00:02:33.513 user 0m1.151s 00:02:33.513 sys 0m1.960s 00:02:33.513 13:31:30 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:33.514 13:31:30 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:33.514 ************************************ 00:02:33.514 END TEST denied 00:02:33.514 ************************************ 00:02:33.514 13:31:30 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:33.514 13:31:30 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:33.514 13:31:30 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:33.514 13:31:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:33.771 ************************************ 00:02:33.771 START TEST allowed 00:02:33.771 ************************************ 00:02:33.771 13:31:30 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:02:33.771 13:31:30 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:02:33.771 13:31:30 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:33.771 13:31:30 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:02:33.771 13:31:30 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:33.771 13:31:30 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:36.303 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:36.303 13:31:32 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:36.303 13:31:32 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:36.303 13:31:32 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:36.303 13:31:32 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:36.303 13:31:32 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:37.681 00:02:37.681 real 0m3.920s 00:02:37.681 user 0m1.012s 00:02:37.681 sys 0m1.754s 00:02:37.681 13:31:34 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:37.681 13:31:34 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:37.681 ************************************ 00:02:37.681 END TEST allowed 00:02:37.681 ************************************ 00:02:37.681 00:02:37.681 real 0m10.944s 00:02:37.681 user 0m3.330s 00:02:37.681 sys 0m5.562s 00:02:37.681 13:31:34 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:37.681 13:31:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:37.681 ************************************ 00:02:37.681 END TEST acl 00:02:37.681 ************************************ 00:02:37.681 13:31:34 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:37.681 13:31:34 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:37.681 13:31:34 setup.sh -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:02:37.681 13:31:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:37.681 ************************************ 00:02:37.681 START TEST hugepages 00:02:37.681 ************************************ 00:02:37.681 13:31:34 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:37.681 * Looking for test storage... 00:02:37.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:37.681 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:37.681 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:37.681 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:37.681 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:37.681 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:37.681 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:37.681 13:31:34 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:37.681 13:31:34 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:37.681 13:31:34 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:37.681 13:31:34 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:37.681 13:31:34 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.681 13:31:34 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.681 13:31:34 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.681 13:31:34 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.681 13:31:34 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.681 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:37.681 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:37.681 13:31:34 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44759364 kB' 'MemAvailable: 48225388 kB' 'Buffers: 2704 kB' 'Cached: 9287968 kB' 'SwapCached: 0 kB' 'Active: 6324000 kB' 'Inactive: 3490800 kB' 'Active(anon): 5937688 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527480 kB' 'Mapped: 204508 kB' 'Shmem: 5413560 kB' 'KReclaimable: 165504 kB' 'Slab: 484816 kB' 'SReclaimable: 165504 kB' 'SUnreclaim: 319312 kB' 'KernelStack: 12784 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562308 kB' 'Committed_AS: 7079460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB' 00:02:37.681 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:02:37.681 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:37.681 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:37.681 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ (the identical test-and-continue trace repeats for every remaining /proc/meminfo key that is not Hugepagesize: MemFree, MemAvailable, Buffers, Cached, SwapCached, the Active/Inactive counters, Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, the Vmalloc counters, Percpu, HardwareCorrupted) 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:37.683 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:02:37.684 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:37.684 13:31:34 
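The scan elided above is the get_meminfo pattern from setup/common.sh: walk /proc/meminfo one 'key: value' pair at a time and echo the value once the requested key matches (here Hugepagesize -> 2048, i.e. 2 MiB pages). A minimal standalone sketch of that pattern, simplified to read the file directly rather than through the mapfile array the trace shows:

    # get_meminfo_sketch KEY -- print KEY's value from /proc/meminfo
    # (illustrative sketch, not the actual setup/common.sh implementation)
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # var holds the key (e.g. "Hugepagesize"), val the number, _ the unit
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done </proc/meminfo
        return 1
    }
    # get_meminfo_sketch Hugepagesize   -> 2048 (kB)

IFS=': ' makes read split on both the colon and the space, which is why the unit ("kB") lands in the throwaway third field.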
00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:02:37.684 13:31:34 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:02:37.684 13:31:34 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:02:37.684 13:31:34 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:02:37.684 13:31:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:37.684 ************************************
00:02:37.684 START TEST default_setup
00:02:37.684 ************************************
00:02:37.684 13:31:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:02:37.684 13:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:02:37.684 13:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:02:37.684 13:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:02:37.684 13:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:02:37.684 13:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:02:37.684 13:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:02:37.684 13:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:37.684 13:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:37.684 13:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:02:37.684 13:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:02:37.684 13:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:02:37.684 13:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:37.684 13:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:37.684 13:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:37.684 13:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:37.684 13:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:02:37.684 13:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:37.684 13:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:37.684 13:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:02:37.684 13:31:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:02:37.684 13:31:34 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:02:37.684 13:31:34 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:39.063 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:02:39.063 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:02:39.063 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:02:39.063 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:02:39.063 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:02:39.063 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:02:39.063 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:02:39.063 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:02:39.063 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:02:39.063 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:02:39.063 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:02:39.063 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:02:39.063 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:02:39.063 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:02:39.063 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:02:39.063 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:02:39.997 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
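For context on the numbers above: get_test_nr_hugepages 2097152 0 requests a 2097152 kB pool, and at the 2048 kB default page size that is 2097152 / 2048 = 1024 pages, the nr_hugepages=1024 seen in the trace. With CLEAR_HUGE=yes exported, setup.sh re-provisions the pool before rebinding the devices to vfio-pci. A hedged sketch of that sysfs pattern, using the standard kernel paths already named in the trace (illustrative only, not SPDK's setup.sh itself):

    # Illustrative: clear every per-node hugepage pool, then request 1024
    # default-size (2048 kB) pages globally. Needs root to write sysfs.
    NRHUGE=1024
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"     # drop any existing reservation
        done
    done
    echo "$NRHUGE" > /proc/sys/vm/nr_hugepages   # the global_huge_nr path above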
00:02:39.997 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:02:39.997 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:02:39.997 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:02:39.997 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:02:39.997 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:02:39.997 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:02:39.997 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:02:39.997 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:39.997 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:39.997 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:39.998 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:39.998 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:39.998 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:39.998 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:39.998 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:39.998 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:39.998 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:39.998 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:39.998 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:39.998 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:39.998 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46857648 kB' 'MemAvailable: 50323640 kB' 'Buffers: 2704 kB' 'Cached: 9288060 kB' 'SwapCached: 0 kB' 'Active: 6342140 kB' 'Inactive: 3490800 kB' 'Active(anon): 5955828 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545432 kB' 'Mapped: 204660 kB' 'Shmem: 5413652 kB' 'KReclaimable: 165440 kB' 'Slab: 484308 kB' 'SReclaimable: 165440 kB' 'SUnreclaim: 318868 kB' 'KernelStack: 12864 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7100464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB'
00:02:39.998-00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [repetitive xtrace elided: same read/continue scan over the keys MemTotal through HardwareCorrupted, none matching AnonHugePages]
00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46857396 kB' 'MemAvailable: 50323388 kB' 'Buffers: 2704 kB' 'Cached: 9288064 kB' 'SwapCached: 0 kB' 'Active: 6342268 kB' 'Inactive: 3490800 kB' 'Active(anon): 5955956 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545548 kB' 'Mapped: 204616 kB' 'Shmem: 5413656 kB' 'KReclaimable: 165440 kB' 'Slab: 484336 kB' 'SReclaimable: 165440 kB' 'SUnreclaim: 318896 kB' 'KernelStack: 12864 kB' 'PageTables: 8020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7100484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB'
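Both snapshots above report AnonHugePages: 0 kB, HugePages_Surp: 0, and HugePages_Total/HugePages_Free: 1024, which are exactly the counters verify_nr_hugepages is pulling out one key at a time (hugepages.sh@97/@99/@100). A hedged sketch of the same consistency check done in a single pass over /proc/meminfo (illustrative; the real logic lives in setup/hugepages.sh):

    # Illustrative one-pass version of the checks traced here: capture the
    # counters, then compare the pool against the requested 1024 pages.
    declare -A m
    while IFS=': ' read -r key val _; do m[$key]=$val; done </proc/meminfo
    anon=${m[AnonHugePages]} surp=${m[HugePages_Surp]} resv=${m[HugePages_Rsvd]}
    if (( ${m[HugePages_Total]} != 1024 )); then
        echo "unexpected HugePages_Total: ${m[HugePages_Total]}" >&2
    fi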
setup/common.sh@32 -- # continue 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.263 13:31:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.263 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:40.264 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [... xtrace elided: each remaining meminfo field (NFS_Unstable through HugePages_Rsvd) is compared against HugePages_Surp and skipped via continue ...]
00:02:40.265 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:40.265 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:40.265 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:40.265 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
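For readers following the wall of xtrace above: what is being traced is a small /proc/meminfo parser. A minimal sketch of the same technique, reconstructed from the trace (not the verbatim setup/common.sh; names and structure are illustrative):

  shopt -s extglob

  get_meminfo() {
      local get=$1 node=$2
      local var val _ line
      local mem_f=/proc/meminfo mem
      # Per-node statistics live in sysfs. With no node argument the probe
      # below becomes ".../node/node/meminfo" (as seen in the trace), fails,
      # and the global /proc/meminfo is used instead.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      # Node files prefix every line with "Node N "; strip it (extglob pattern).
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          # "HugePages_Surp:      0" -> var=HugePages_Surp val=0
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done
      return 1
  }

  get_meminfo HugePages_Surp     # prints 0 on this box
  get_meminfo HugePages_Free 0   # prints 1024 for NUMA node 0

This is why the trace shows one IFS=': ' / read / [[ ... ]] / continue record per meminfo field: the scan walks every line until the requested key matches, then echoes its value.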
00:02:40.265 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:40.265 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:40.265 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:40.265 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:40.265 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:40.265 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:40.265 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:40.265 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:40.265 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:40.265 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:40.265 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:40.265 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:40.265 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46857760 kB' 'MemAvailable: 50323752 kB' 'Buffers: 2704 kB' 'Cached: 9288080 kB' 'SwapCached: 0 kB' 'Active: 6342132 kB' 'Inactive: 3490800 kB' 'Active(anon): 5955820 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545352 kB' 'Mapped: 204524 kB' 'Shmem: 5413672 kB' 'KReclaimable: 165440 kB' 'Slab: 484304 kB' 'SReclaimable: 165440 kB' 'SUnreclaim: 318864 kB' 'KernelStack: 12864 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7100504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB'
00:02:40.266 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [... xtrace elided: each meminfo field from MemTotal through HugePages_Free is compared against HugePages_Rsvd and skipped via continue ...]
00:02:40.267 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:40.267 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:40.267 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:40.267 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:02:40.267 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:40.267 nr_hugepages=1024
00:02:40.267 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:40.267 resv_hugepages=0
00:02:40.267 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:40.267 surplus_hugepages=0
00:02:40.267 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:40.267 anon_hugepages=0
00:02:40.267 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:40.267 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
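The @107/@109 arithmetic guards above reconcile the requested hugepage pool with the kernel's view. Standalone, the same consistency check might look like this (a sketch reusing the get_meminfo helper sketched earlier; the mismatch message is illustrative, not from the test suite):

  nr_hugepages=1024
  surp=$(get_meminfo HugePages_Surp)
  resv=$(get_meminfo HugePages_Rsvd)
  total=$(get_meminfo HugePages_Total)
  # The kernel-reported total must equal the requested pages plus any
  # surplus and reserved pages, or the setup step failed.
  (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2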
00:02:40.267 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:40.267 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:40.267 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:40.267 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:40.267 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:40.267 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:40.267 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:40.267 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:40.267 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:40.267 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:40.267 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:40.267 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:40.267 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46857004 kB' 'MemAvailable: 50322996 kB' 'Buffers: 2704 kB' 'Cached: 9288104 kB' 'SwapCached: 0 kB' 'Active: 6342124 kB' 'Inactive: 3490800 kB' 'Active(anon): 5955812 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545348 kB' 'Mapped: 204524 kB' 'Shmem: 5413696 kB' 'KReclaimable: 165440 kB' 'Slab: 484304 kB' 'SReclaimable: 165440 kB' 'SUnreclaim: 318864 kB' 'KernelStack: 12864 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7100528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB'
00:02:40.268 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [... xtrace elided: each meminfo field from MemTotal through Unaccepted is compared against HugePages_Total and skipped via continue ...]
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
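A note on the \H\u\g\e\P\a\g\e\s... noise that dominates this trace: inside [[ ]], an unquoted right-hand side of == is treated as a glob pattern, so setup/common.sh quotes the target key to force a literal comparison, and bash's xtrace renders that quoted expansion with every character backslash-escaped. A two-line demonstration of the same rendering:

  $ bash -xc 't=HugePages_Total; [[ HugePages_Total == "$t" ]] && echo literal'
  + t=HugePages_Total
  + [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
  + echo literal
  literal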
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
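The get_nodes walk above enumerates NUMA nodes through sysfs with an extglob pattern and keys an array by the node index. Roughly, under the assumption that the 1024/0 values come from each node's 2 MiB hugepage counter (a sketch, not the exact setup/hugepages.sh variables):

  shopt -s extglob nullglob
  declare -A nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      # "/sys/devices/system/node/node0" -> array key "0"
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  no_nodes=${#nodes_sys[@]}
  (( no_nodes > 0 )) || echo 'no NUMA nodes detected' >&2

On this host that yields two nodes, with all 1024 pages placed on node 0 and none on node 1, matching the trace.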
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:40.269 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21644836 kB' 'MemUsed: 11232104 kB' 'SwapCached: 0 kB' 'Active: 4865684 kB' 'Inactive: 3354312 kB' 'Active(anon): 4598552 kB' 'Inactive(anon): 0 kB' 'Active(file): 267132 kB' 'Inactive(file): 3354312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7978588 kB' 'Mapped: 143568 kB' 'AnonPages: 244544 kB' 'Shmem: 4357144 kB' 'KernelStack: 6952 kB' 'PageTables: 3824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 74288 kB' 'Slab: 258324 kB' 'SReclaimable: 74288 kB' 'SUnreclaim: 184036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:02:40.270 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [... xtrace elided: each node0 meminfo field from MemTotal through HugePages_Total is compared against HugePages_Surp and skipped via continue; the captured excerpt ends mid-scan here ...]
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.270 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.270 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.270 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:40.270 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:40.270 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:40.270 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:40.270 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:40.270 13:31:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:40.270 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:40.270 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:40.270 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:40.270 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:40.270 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:40.270 node0=1024 expecting 1024 00:02:40.270 13:31:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:40.270 00:02:40.270 real 0m2.476s 00:02:40.270 user 0m0.683s 00:02:40.270 sys 0m0.909s 00:02:40.270 13:31:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:40.270 13:31:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:40.270 ************************************ 00:02:40.270 END TEST default_setup 00:02:40.270 ************************************ 00:02:40.270 13:31:37 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:40.270 13:31:37 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:40.270 13:31:37 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:40.270 13:31:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:40.270 ************************************ 00:02:40.270 START TEST per_node_1G_alloc 00:02:40.270 ************************************ 00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( 
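The END/START banners, the real/user/sys block, and the '[' 2 -le 1 ']' guard above all come from the run_test wrapper in autotest_common.sh. The wrapper itself never appears in this log; the following is a minimal Bash sketch of the pattern the trace implies (argument-count guard, banner, timed invocation, closing banner), not the verbatim SPDK helper:

    # Minimal sketch reconstructed from the xtrace; not the verbatim SPDK helper.
    run_test() {
        # the trace shows an argument-count guard of the form '[' 2 -le 1 ']'
        if [ $# -le 1 ]; then
            echo "usage: run_test <name> <command> [args...]" >&2
            return 1
        fi
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                 # produces the real/user/sys block seen above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

    # invoked as at hugepages.sh@211: run_test per_node_1G_alloc per_node_1G_alloc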
00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:40.270 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:40.271 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:40.271 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:40.271 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:02:40.271 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:02:40.271 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:02:40.271 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:02:40.271 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:40.271 13:31:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:41.651 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:41.651 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:41.651 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:41.651 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:41.651 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:41.651 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:41.651 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:41.651 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:41.652 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:41.652 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:41.652 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:41.652 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:41.652 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:41.652 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:41.652 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:41.652 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:41.652 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:41.652 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
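get_test_nr_hugepages is asked for 1048576 kB (1 GiB) across nodes 0 and 1, and the trace settles on nr_hugepages=512 per node, 1024 in total. With the 2048 kB Hugepagesize reported in the meminfo snapshots below, the arithmetic works out as in this illustrative sketch (variable names are ours, not the hugepages.sh originals):

    # Illustrative only: reproduces the numbers in the trace, with our own names.
    size_kb=1048576                                    # requested size: 1 GiB in kB
    default_hugepage_kb=2048                           # Hugepagesize per the snapshots below
    nr_hugepages=$(( size_kb / default_hugepage_kb ))  # 1048576 / 2048 = 512
    declare -a nodes_test
    for node in 0 1; do                                # HUGENODE=0,1
        nodes_test[node]=$nr_hugepages                 # 512 pages requested on each node
    done
    echo "per node: $nr_hugepages, total: $(( nr_hugepages * ${#nodes_test[@]} ))"
    # -> per node: 512, total: 1024 (matching nr_hugepages=1024 above)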
00:02:41.652 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:02:41.652 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:02:41.652 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:41.652 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:41.652 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:41.652 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:41.652 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:41.652 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:41.652 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:41.652 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:41.652 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:41.652 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:41.652 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:41.652 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:41.652 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:41.652 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:41.652 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:41.652 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:41.652 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.652 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:41.652 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46853204 kB' 'MemAvailable: 50319196 kB' 'Buffers: 2704 kB' 'Cached: 9288180 kB' 'SwapCached: 0 kB' 'Active: 6342348 kB' 'Inactive: 3490800 kB' 'Active(anon): 5956036 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545484 kB' 'Mapped: 205124 kB' 'Shmem: 5413772 kB' 'KReclaimable: 165440 kB' 'Slab: 484548 kB' 'SReclaimable: 165440 kB' 'SUnreclaim: 319108 kB' 'KernelStack: 12848 kB' 'PageTables: 7920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7100584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB'
[xtrace condensed: setup/common.sh@31-32 scan each /proc/meminfo field, MemTotal through HardwareCorrupted, skipping via "continue" until AnonHugePages matches]
00:02:41.653 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:41.653 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:41.653 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:41.653 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
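Each condensed scan in this log is the same get_meminfo pattern: slurp the whole meminfo file with mapfile, strip any "Node N" prefixes, then re-read it line by line until the requested field matches. A minimal sketch of that pattern, reconstructed from the xtrace (the real function lives in setup/common.sh and may differ in detail):

    #!/usr/bin/env bash
    shopt -s extglob                      # needed for the +([0-9]) pattern below

    get_meminfo() {                       # get_meminfo <Field> [<node>]
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo
        # with a node argument, read that node's meminfo instead (the trace
        # probes /sys/devices/system/node/node$node/meminfo for this)
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # drop "Node N " prefixes, as in common.sh@29
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the per-field skips condensed above
            echo "$val"                        # common.sh@33: print the value and stop
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total   # prints 1024 on this box, per the snapshots in this log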
00:02:41.653 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:41.653 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:41.653 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:41.653 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:41.653 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:41.653 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:41.653 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:41.653 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:41.653 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:41.653 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:41.653 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.653 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:41.653 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46852368 kB' 'MemAvailable: 50318360 kB' 'Buffers: 2704 kB' 'Cached: 9288184 kB' 'SwapCached: 0 kB' 'Active: 6345148 kB' 'Inactive: 3490800 kB' 'Active(anon): 5958836 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548264 kB' 'Mapped: 205048 kB' 'Shmem: 5413776 kB' 'KReclaimable: 165440 kB' 'Slab: 484548 kB' 'SReclaimable: 165440 kB' 'SUnreclaim: 319108 kB' 'KernelStack: 12848 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7104068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB'
[xtrace condensed: setup/common.sh@31-32 scan each field, MemTotal through HugePages_Free, skipping via "continue" until HugePages_Surp matches]
00:02:41.655 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:41.655 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:41.655 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:41.655 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
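At this point verify_nr_hugepages has anon=0 and surp=0, and the HugePages_Rsvd query starts below; the per-node tallies are then compared against the expected count, the same "node0=1024 expecting 1024" check that closed default_setup above. A rough sketch of that final accounting, reusing the get_meminfo sketch from earlier (the names and the exact comparison are assumptions, not the verbatim hugepages.sh logic):

    # Rough sketch with assumed names; the real checks live in setup/hugepages.sh.
    anon=$(get_meminfo AnonHugePages)    # 0 in this run
    surp=$(get_meminfo HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)   # the query that begins below
    echo "anon=$anon surp=$surp resv=$resv"
    declare -a nodes_test=([0]=1024)     # pages this test observed on node 0
    (( nodes_test[0] += surp ))          # cf. the "(( nodes_test[node] += 0 ))" trace line
    echo "node0=${nodes_test[0]} expecting 1024"
    [[ ${nodes_test[0]} == 1024 ]]       # final check, cf. "[[ 1024 == \1\0\2\4 ]]"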
00:02:41.655 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:41.655 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:41.655 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:41.655 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:41.655 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:41.655 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:41.655 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:41.655 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:41.655 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:41.655 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.655 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:41.655 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46846956 kB' 'MemAvailable: 50312948 kB' 'Buffers: 2704 kB' 'Cached: 9288200 kB' 'SwapCached: 0 kB' 'Active: 6347976 kB' 'Inactive: 3490800 kB' 'Active(anon): 5961664 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551112 kB' 'Mapped: 205324 kB' 'Shmem: 5413792 kB' 'KReclaimable: 165440 kB' 'Slab: 484516 kB' 'SReclaimable: 165440 kB' 'SUnreclaim: 319076 kB' 'KernelStack: 12880 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7106872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196052 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB'
[xtrace condensed: setup/common.sh@31-@32 compare each key from MemTotal through HugePages_Free against HugePages_Rsvd and continue]
00:02:41.657 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:41.657 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:41.657 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:41.657 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:41.657 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:41.657 nr_hugepages=1024
00:02:41.657 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:41.657 resv_hugepages=0
00:02:41.657 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:41.657 surplus_hugepages=0
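[annotation] The traced helper above reduces to roughly the following -- a minimal sketch reconstructed from the xtrace records (setup/common.sh@17-@33), not the verbatim SPDK source; how mapfile is fed and the failure return value are assumptions:

    shopt -s extglob                       # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1                       # key to look up, e.g. HugePages_Rsvd
        local node=$2                      # optional NUMA node; empty = system-wide
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node queries read that node's own meminfo file instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"            # one array element per line (assumed redirection)
        mem=("${mem[@]#Node +([0-9]) }")     # per-node files prefix each line with "Node N "; strip it
        while IFS=': ' read -r var val _; do # split "Key:   12345 kB" into key and number
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1                             # key not present (assumed fallback)
    }

The xtrace's long run of [[ ... ]] / continue records is exactly this loop: bash prints one comparison per meminfo key until the requested key matches and its value is echoed.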
00:02:41.657 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:41.657 anon_hugepages=0
00:02:41.657 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:41.657 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:41.657 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:41.657 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:41.657 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:41.657 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:41.658 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:41.658 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:41.658 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:41.658 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:41.658 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:41.658 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:41.658 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.658 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:41.658 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46846956 kB' 'MemAvailable: 50312948 kB' 'Buffers: 2704 kB' 'Cached: 9288200 kB' 'SwapCached: 0 kB' 'Active: 6346892 kB' 'Inactive: 3490800 kB' 'Active(anon): 5960580 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549996 kB' 'Mapped: 204952 kB' 'Shmem: 5413792 kB' 'KReclaimable: 165440 kB' 'Slab: 484516 kB' 'SReclaimable: 165440 kB' 'SUnreclaim: 319076 kB' 'KernelStack: 12880 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7105960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB'
[xtrace condensed: setup/common.sh@31-@32 compare each key from MemTotal through Unaccepted against HugePages_Total and continue]
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
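[annotation] The get_nodes trace just above enumerates NUMA nodes with an extglob pattern and records a per-node hugepage target; a sketch of that step under the same assumptions (the 512-pages-per-node value matches this run):

    shopt -s extglob                             # the node+([0-9]) glob below needs extglob
    declare -a nodes_sys                         # per-node hugepage targets, indexed by node id

    # Enumerate /sys/devices/system/node/node0, node1, ... as hugepages.sh@29-@32 shows.
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512            # ".../node1" -> index 1
    done
    no_nodes=${#nodes_sys[@]}                    # 2 on this machine
    (( no_nodes > 0 )) || echo "no NUMA nodes found" >&2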
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:41.920 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22695480 kB' 'MemUsed: 10181460 kB' 'SwapCached: 0 kB' 'Active: 4865776 kB' 'Inactive: 3354312 kB' 'Active(anon): 4598644 kB' 'Inactive(anon): 0 kB' 'Active(file): 267132 kB' 'Inactive(file): 3354312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7978660 kB' 'Mapped: 143580 kB' 'AnonPages: 244564 kB' 'Shmem: 4357216 kB' 'KernelStack: 6984 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 74288 kB' 'Slab: 258400 kB' 'SReclaimable: 74288 kB' 'SUnreclaim: 184112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-@32 scan node0's keys (MemTotal onward) toward HugePages_Surp and continue; the capture ends mid-scan]
-r var val _ 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 24154440 kB' 'MemUsed: 3510332 kB' 'SwapCached: 0 kB' 'Active: 1476796 kB' 'Inactive: 136488 kB' 'Active(anon): 1357616 kB' 'Inactive(anon): 0 kB' 'Active(file): 119180 kB' 'Inactive(file): 136488 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1312248 kB' 'Mapped: 60952 kB' 'AnonPages: 301136 kB' 'Shmem: 1056580 kB' 'KernelStack: 5912 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91152 kB' 'Slab: 226100 kB' 'SReclaimable: 91152 kB' 'SUnreclaim: 134948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.922 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- 
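[Annotation: the get_meminfo helper exercised above reduces to a few lines of bash. The sketch below is reconstructed from the xtrace alone (paths, variable names, and the "Node N " prefix strip are taken from the quoted commands; the loop shape is an assumption, and the file is read directly instead of the printf-fed mapfile the trace shows). It is not a verbatim copy of setup/common.sh.]

#!/usr/bin/env bash
shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

# Reconstructed sketch of the get_meminfo idiom traced above.
get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # Per-node stats live in /sys/devices/system/node/nodeN/meminfo and
    # prefix every line with "Node N ".
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix, if any
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        # The log's RHS appears as \H\u\g\e\P\a\g\e\s\_\S\u\r\p because
        # xtrace escapes each character to force a literal, non-glob match.
        [[ $var == "$get" ]] || continue
        echo "$val"   # e.g. 0 for HugePages_Surp, 512 for HugePages_Free
        return 0
    done
    return 1
}

get_meminfo HugePages_Surp 1   # prints 0, matching the trace above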
[... xtrace repetition elided: the same match/continue loop walks the node1 meminfo fields (MemTotal through HugePages_Free) without matching HugePages_Surp ...]
00:02:41.923 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:41.923 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:41.923 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:41.923 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:41.923 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:41.923 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:41.923 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:41.923 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:41.923 node0=512 expecting 512
00:02:41.923 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:41.923 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:41.923 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:41.923 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:02:41.923 node1=512 expecting 512
00:02:41.923 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:02:41.923
00:02:41.923 real 0m1.543s
00:02:41.923 user 0m0.647s
00:02:41.923 sys 0m0.862s
00:02:41.923 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:02:41.923 13:31:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:41.923 ************************************
00:02:41.923 END TEST per_node_1G_alloc
00:02:41.923 ************************************
00:02:41.923 13:31:38 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:02:41.923 13:31:38 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:02:41.923 13:31:38 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:02:41.923 13:31:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:41.923 ************************************
00:02:41.923 START TEST even_2G_alloc
00:02:41.923 ************************************
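[Annotation: the START/END banners and the real/user/sys block between them come from a run_test-style harness. A minimal sketch of that shape, under simplified argument handling (the real wrapper in autotest_common.sh also validates its arguments, as the '[' 2 -le 1 ']' check above hints); not the project's exact implementation:]

run_test() {
    # Banner, time the named test, banner; propagate its exit code.
    local name=$1; shift
    printf '%s\n' '************************************' \
        "START TEST $name" '************************************'
    time "$@"
    local rc=$?
    printf '%s\n' '************************************' \
        "END TEST $name" '************************************'
    return "$rc"
}

run_test even_2G_alloc even_2G_alloc   # as invoked by setup/hugepages.sh@212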
00:02:41.923 13:31:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:02:41.923 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:02:41.923 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:02:41.923 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:41.923 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:41.923 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:41.923 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:41.923 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:41.923 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:41.924 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:41.924 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:41.924 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:41.924 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:41.924 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:41.924 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:02:41.924 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:41.924 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:41.924 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:02:41.924 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:02:41.924 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:41.924 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:41.924 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:02:41.924 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:02:41.924 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:41.924 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:02:41.924 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:02:41.924 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:02:41.924 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:41.924 13:31:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:43.303 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:43.303 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:43.303 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:43.303 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:43.303 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:43.303 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:43.303 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:43.303 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:43.303 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:43.303 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:43.303 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:43.303 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:43.303 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:43.303 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:43.303 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:43.303 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:43.303 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:43.303 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:02:43.303 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:02:43.303 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:43.303 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:43.303 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:43.303 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:43.303 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:43.303 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:43.303 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:43.303 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:43.303 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:43.303 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:43.303 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:43.303 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:43.303 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:43.303 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:43.303 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:43.303 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:43.303 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:43.303 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:43.303 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46852520 kB' 'MemAvailable: 50318560 kB' 'Buffers: 2704 kB' 'Cached: 9288312 kB' 'SwapCached: 0 kB' 'Active: 6342760 kB' 'Inactive: 3490800 kB' 'Active(anon): 5956448 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545812 kB' 'Mapped: 204552 kB' 'Shmem: 5413904 kB' 'KReclaimable: 165536 kB' 'Slab: 484492 kB' 'SReclaimable: 165536 kB' 'SUnreclaim: 318956 kB' 'KernelStack: 12848 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7100976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB'
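[Annotation: the get_test_nr_hugepages / get_test_nr_hugepages_per_node trace above boils down to simple sizing arithmetic. A sketch of the outcome, not the script's exact control flow; sizes are in kB and default_hugepages corresponds to the Hugepagesize: 2048 kB line in /proc/meminfo:]

size=2097152                  # kB requested: 2 GiB
default_hugepages=2048        # kB per hugepage (Hugepagesize above)
nr_hugepages=$(( size / default_hugepages ))   # 1024 pages
_no_nodes=2                   # NUMA nodes on this box
per_node=$(( nr_hugepages / _no_nodes ))       # 512 per node
declare -a nodes_test
for (( node = 0; node < _no_nodes; node++ )); do
    nodes_test[node]=$per_node
done
echo "NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes"        # what setup.sh is handed
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512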
[... xtrace repetition elided: match/continue loop over the /proc/meminfo fields (MemTotal through HardwareCorrupted), none matching AnonHugePages ...]
00:02:43.304 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:43.304 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:43.304 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:43.304 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:43.304 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:43.304 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
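[Annotation: verify_nr_hugepages first rules out transparent hugepage interference: the *\[\n\e\v\e\r\]* test above passes because "always [madvise] never" does not select never, so the script reads AnonHugePages and records anon=0. The same numbers can be spot-checked with equivalent ad-hoc commands; these are standard kernel interfaces, not what the script itself runs:]

cat /sys/kernel/mm/transparent_hugepage/enabled    # always [madvise] never
awk '/^AnonHugePages:/ {print $2}' /proc/meminfo   # 0 -> anon=0 above
awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo  # 0, the surplus-page count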
setup/common.sh@18 -- # local node=
00:02:43.304 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:43.304 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:43.304 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:43.304 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:43.304 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:43.304 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:43.304 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:43.304 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:43.304 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:43.305 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46855088 kB' 'MemAvailable: 50321096 kB' 'Buffers: 2704 kB' 'Cached: 9288316 kB' 'SwapCached: 0 kB' 'Active: 6340500 kB' 'Inactive: 3490800 kB' 'Active(anon): 5954188 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543600 kB' 'Mapped: 203800 kB' 'Shmem: 5413908 kB' 'KReclaimable: 165472 kB' 'Slab: 484420 kB' 'SReclaimable: 165472 kB' 'SUnreclaim: 318948 kB' 'KernelStack: 12880 kB' 'PageTables: 7924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7085636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB'
[... xtrace repetition elided: match/continue loop over the /proc/meminfo fields (MemTotal through PageTables); the excerpt ends mid-scan ...]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.306 13:31:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # 
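The trace above is setup/common.sh's get_meminfo walking a meminfo file key by key. A minimal sketch of that loop, reconstructed from the xtrace only (the function name get_meminfo_sketch and the loop structure are illustrative, not the verbatim SPDK source):

    # Sketch of the parsing seen above: pick the global or per-node meminfo file,
    # strip the sysfs "Node N " prefix, split on ': ', echo the requested value.
    shopt -s extglob   # needed for the +([0-9]) pattern used in the trace
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        local -a mem
        # With a node argument, read that node's sysfs copy (common.sh@23-@24).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # sysfs lines look like "Node 0 MemTotal: ..."; drop the prefix (common.sh@29).
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long runs of continue above
            echo "$val"
            return 0
        done
        return 1
    }

Called as get_meminfo_sketch HugePages_Surp it would print 0 on this box, matching the echo 0 / return 0 pair in the trace; a second argument selects a node's sysfs meminfo instead of /proc/meminfo.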
mem=("${mem[@]#Node +([0-9]) }") 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.306 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46855580 kB' 'MemAvailable: 50321588 kB' 'Buffers: 2704 kB' 'Cached: 9288332 kB' 'SwapCached: 0 kB' 'Active: 6339328 kB' 'Inactive: 3490800 kB' 'Active(anon): 5953016 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542348 kB' 'Mapped: 203700 kB' 'Shmem: 5413924 kB' 'KReclaimable: 165472 kB' 'Slab: 484520 kB' 'SReclaimable: 165472 kB' 'SUnreclaim: 319048 kB' 'KernelStack: 12800 kB' 'PageTables: 7584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7085656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 
13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.307 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:43.308 nr_hugepages=1024 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:43.308 resv_hugepages=0 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:43.308 surplus_hugepages=0 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:43.308 anon_hugepages=0 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:43.308 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46855332 kB' 'MemAvailable: 50321332 kB' 
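With surp=0 and resv=0 in hand, hugepages.sh@107-@110 cross-check the kernel's bookkeeping: the pool read back from HugePages_Total must equal the requested nr_hugepages plus surplus and reserved pages. A hedged recomputation of that invariant, reusing the hypothetical get_meminfo_sketch helper from the earlier block:

    # The pool the kernel reports must match the request with no slack:
    # here 1024 == 1024 + 0 + 0.
    nr_hugepages=1024
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)  # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2

The get_meminfo HugePages_Total call that feeds this check is the next stretch of trace.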
'Buffers: 2704 kB' 'Cached: 9288356 kB' 'SwapCached: 0 kB' 'Active: 6339300 kB' 'Inactive: 3490800 kB' 'Active(anon): 5952988 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542308 kB' 'Mapped: 203700 kB' 'Shmem: 5413948 kB' 'KReclaimable: 165456 kB' 'Slab: 484500 kB' 'SReclaimable: 165456 kB' 'SUnreclaim: 319044 kB' 'KernelStack: 12784 kB' 'PageTables: 7528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7085680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:43.309 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:43.309 13:31:40 
00:02:43.309-00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace field scan: IFS=': '; read -r var val _; each remaining /proc/meminfo field (KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted) is compared against HugePages_Total and skipped with continue until HugePages_Total matches]
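The scan above is setup/common.sh's get_meminfo helper walking a meminfo file one "key: value" pair at a time. A minimal sketch of that pattern, under the assumption that only the traced IFS/read/compare behavior matters (the function name and structure here are illustrative, not the exact SPDK helper):

    # Print the value of one meminfo field, e.g. get_field HugePages_Total
    get_field() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
        # Every non-matching field is skipped, which is what produces the
        # long run of "continue" lines in the trace above.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
    }

On the host traced here, get_field HugePages_Total would print 1024, matching the echo on the next line.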
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:43.310 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:43.311 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22699676 kB' 'MemUsed: 10177264 kB' 'SwapCached: 0 kB' 'Active: 4863968 kB' 'Inactive: 3354312 kB' 'Active(anon): 4596836 kB' 'Inactive(anon): 0 kB' 'Active(file): 267132 kB' 'Inactive(file): 3354312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7978828 kB' 'Mapped: 142992 kB' 'AnonPages: 242632 kB' 'Shmem: 4357384 kB' 'KernelStack: 6920 kB' 'PageTables: 3584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 74200 kB' 'Slab: 258176 kB' 'SReclaimable: 74200 kB' 'SUnreclaim: 183976 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
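The get_nodes trace above discovers NUMA nodes by globbing /sys/devices/system/node/node<N>; the node+([0-9]) pattern requires bash's extglob option, which SPDK's setup scripts enable. A short sketch of the same enumeration (the array name and the fixed 512 value are taken from this run):

    shopt -s extglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=512   # strip the path prefix, keep the node index
    done
    echo "no_nodes=${#nodes_sys[@]}"  # prints no_nodes=2 on this dual-node host

Note also that the per-node meminfo files prefix every line with "Node N ", which the helper strips with the extglob expansion mem=("${mem[@]#Node +([0-9]) }") seen above before scanning the fields.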
00:02:43.311-00:02:43.312 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace field scan: each node0 meminfo field (MemTotal through HugePages_Free) is compared against HugePages_Surp and skipped with continue until HugePages_Surp matches]
00:02:43.312 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:43.312 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:43.312 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:43.312 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:43.312 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:43.312 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:43.312 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:43.312 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:02:43.312 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:43.312 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:43.312 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:43.312 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:43.312 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:43.312 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:43.312 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:43.312 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:43.312 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:43.312 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 24156264 kB' 'MemUsed: 3508508 kB' 'SwapCached: 0 kB' 'Active: 1475428 kB' 'Inactive: 136488 kB' 'Active(anon): 1356248 kB' 'Inactive(anon): 0 kB' 'Active(file): 119180 kB' 'Inactive(file): 136488 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1312252 kB' 'Mapped: 60708 kB' 'AnonPages: 299752 kB' 'Shmem: 1056584 kB' 'KernelStack: 5896 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91256 kB' 'Slab: 226324 kB' 'SReclaimable: 91256 kB' 'SUnreclaim: 135068 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
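Both per-node queries return HugePages_Surp = 0, so the bookkeeping traced above reduces to simple arithmetic. A condensed restatement (not the script itself) of what the even_2G_alloc check verifies, using the values from this run:

    nr_hugepages=1024 surp=0 resv=0
    (( 1024 == nr_hugepages + surp + resv ))   # global count matches
    node0=512 node1=512
    (( node0 + node1 == nr_hugepages ))        # 512 + 512 = 1024, split evenly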
00:02:43.312-00:02:43.313 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace field scan: each node1 meminfo field (MemTotal through HugePages_Free) is compared against HugePages_Surp and skipped with continue until HugePages_Surp matches]
00:02:43.313 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:43.313 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:43.313 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:43.313 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:43.313 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:43.313 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:43.313 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:43.313 node0=512 expecting 512
00:02:43.313 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:43.313 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:43.313 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:43.313 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:02:43.313 node1=512 expecting 512
00:02:43.313 13:31:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:02:43.313 
00:02:43.313 real 0m1.469s
00:02:43.313 user 0m0.614s
00:02:43.313 sys 0m0.822s
00:02:43.313 13:31:40 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:02:43.313 13:31:40 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:43.313 ************************************
00:02:43.313 END TEST even_2G_alloc
00:02:43.313 ************************************
00:02:43.313 13:31:40 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:02:43.313 13:31:40 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:02:43.313 13:31:40 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:02:43.313 13:31:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:43.313 ************************************
00:02:43.313 START TEST odd_alloc
00:02:43.313 ************************************
00:02:43.313 13:31:40 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
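odd_alloc asks for size=2098176 kB, which is HUGEMEM=2049 MiB expressed in kB. With the default 2048 kB hugepage that is 1024.5 pages, and the 1025-page target set on the next lines is consistent with a ceiling division (a worked restatement; the rounding mode is inferred from the traced values, not quoted from the script):

    size_kb=2098176      # 2049 * 1024
    hugepage_kb=2048     # default 2 MiB hugepage
    echo $(( (size_kb + hugepage_kb - 1) / hugepage_kb ))   # prints 1025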
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:43.314 13:31:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:44.693 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:44.693 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:44.693 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:44.693 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:44.693 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:44.693 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:44.693 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:44.693 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:44.693 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:44.693 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:44.693 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:44.693 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:44.693 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:44.693 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:44.694 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:44.694 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:44.694 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:44.694 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:02:44.694 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:02:44.694 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:44.694 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:44.694 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:44.694 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:44.694 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:44.694 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:44.694 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:44.694 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:44.694 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:44.694 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:44.694 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:44.694 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:44.694 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:44.694 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:44.694 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:44.694 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:44.694 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:44.694 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:44.694 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46831660 kB' 'MemAvailable: 50297648 kB' 'Buffers: 2704 kB' 'Cached: 9288440 kB' 'SwapCached: 0 kB' 'Active: 6339640 kB' 'Inactive: 3490800 kB' 'Active(anon): 5953328 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542504 kB' 'Mapped: 203848 kB' 'Shmem: 5414032 kB' 'KReclaimable: 165432 kB' 'Slab: 484464 kB' 'SReclaimable: 165432 kB' 'SUnreclaim: 319032 kB' 'KernelStack: 12816 kB' 'PageTables: 7640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7085748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB'
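The per-node split traced earlier in this test distributes the odd 1025-page total over 2 nodes, filling from the highest node down so the remainder lands on node 0 (node1=512, node0=513, matching the ": 513" / ": 1" bookkeeping above). A sketch that reproduces the traced result; the exact SPDK bookkeeping may differ:

    total=1025 no_nodes=2
    nodes_test=()
    while (( no_nodes > 0 )); do
      (( per_node = total / no_nodes ))     # 512 on the first pass
      nodes_test[no_nodes - 1]=$per_node
      (( total -= per_node ))               # leaves 513 for node 0
      (( no_nodes-- ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512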
00:02:44.694 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [xtrace field scan: each /proc/meminfo field (MemTotal through VmallocTotal so far) is compared against AnonHugePages and skipped with continue; the loop continues beyond this excerpt]
var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46832528 kB' 'MemAvailable: 50298516 kB' 'Buffers: 2704 kB' 'Cached: 9288444 kB' 'SwapCached: 0 kB' 'Active: 6340280 kB' 'Inactive: 3490800 kB' 'Active(anon): 5953968 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543212 kB' 'Mapped: 203792 kB' 'Shmem: 5414036 kB' 'KReclaimable: 165432 kB' 'Slab: 484448 kB' 'SReclaimable: 165432 kB' 'SUnreclaim: 319016 kB' 'KernelStack: 12864 kB' 'PageTables: 7768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7085768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.695 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- 
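The scan just condensed is the entire lookup. A minimal Bash sketch reconstructed from the xtrace (the names get_meminfo, get, mem_f, var, and val come from the trace itself; the per-node file selection and the "Node N " prefix strip at setup/common.sh@23/@29 are deliberately omitted, so this is an assumption-laden sketch, not the verbatim setup/common.sh source):

    #!/usr/bin/env bash
    # Walk /proc/meminfo, split each line on ': ', and print the value of
    # the requested key -- the loop whose iterations are traced above.
    get_meminfo() {
        local get=$1 var val _
        local mem_f=/proc/meminfo
        while IFS=': ' read -r var val _; do
            # Each non-matching key is one of the 'continue' iterations above.
            [[ $var == "$get" ]] || continue
            echo "$val" # a trailing 'kB' unit lands in $_ and is dropped
            return 0
        done <"$mem_f"
        return 1 # key not present
    }

Called as surp=$(get_meminfo HugePages_Surp) against the snapshot above, this prints 0, which is the surp=0 recorded next in the trace.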
00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:44.696 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46834348 kB' 'MemAvailable: 50300336 kB' 'Buffers: 2704 kB' 'Cached: 9288464 kB' 'SwapCached: 0 kB' 'Active: 6340352 kB' 'Inactive: 3490800 kB' 'Active(anon): 5954040 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543160 kB' 'Mapped: 203712 kB' 'Shmem: 5414056 kB' 'KReclaimable: 165432 kB' 'Slab: 484408 kB' 'SReclaimable: 165432 kB' 'SUnreclaim: 318976 kB' 'KernelStack: 12832 kB' 'PageTables: 7672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7091820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB'
[xtrace condensed: the per-key scan ran over the snapshot above and hit "continue" for every key from MemTotal through HugePages_Free; none matched HugePages_Rsvd]
00:02:44.698 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:44.698 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:44.698 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:44.698 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:44.698 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:02:44.698 nr_hugepages=1025
00:02:44.698 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:44.698 resv_hugepages=0
00:02:44.698 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:44.698 surplus_hugepages=0
00:02:44.698 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:44.698 anon_hugepages=0
00:02:44.698 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:02:44.698 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
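Spelled out, hugepages.sh@97-109 gathers the three counters via get_meminfo and asserts that the kernel still accounts for exactly the odd page count this test requested. A standalone rendering under that reading (the literal 1025 and the variable names are taken from the trace; this is a sketch, not the verbatim setup/hugepages.sh source):

    anon=$(get_meminfo AnonHugePages)  # 0 in the trace above
    surp=$(get_meminfo HugePages_Surp) # 0
    resv=$(get_meminfo HugePages_Rsvd) # 0
    nr_hugepages=1025                  # the odd allocation requested earlier in the test
    # Either check exiting non-zero would fail this test run:
    (( 1025 == nr_hugepages + surp + resv ))
    (( 1025 == nr_hugepages ))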
00:02:44.698 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:44.698 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:44.698 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:44.698 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:44.698 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:44.698 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:44.959 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:44.959 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:44.959 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:44.959 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:44.959 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:44.959 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:44.959 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46837084 kB' 'MemAvailable: 50303072 kB' 'Buffers: 2704 kB' 'Cached: 9288484 kB' 'SwapCached: 0 kB' 'Active: 6339940 kB' 'Inactive: 3490800 kB' 'Active(anon): 5953628 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542768 kB' 'Mapped: 203712 kB' 'Shmem: 5414076 kB' 'KReclaimable: 165432 kB' 'Slab: 484408 kB' 'SReclaimable: 165432 kB' 'SUnreclaim: 318976 kB' 'KernelStack: 12752 kB' 'PageTables: 7372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7085444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB'
[xtrace condensed: the per-key scan resumed over the snapshot above, hitting "continue" for MemTotal through Bounce; the scan continues]
-- # continue 00:02:44.960 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.960 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.960 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.960 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.960 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.960 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.960 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.960 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.960 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.960 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.960 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.960 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.960 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.960 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.960 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.960 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.960 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.960 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.960 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.960 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.960 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.960 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.960 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.961 
13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@112 -- # get_nodes 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22687492 kB' 'MemUsed: 10189448 kB' 'SwapCached: 0 kB' 'Active: 4863588 kB' 'Inactive: 3354312 kB' 'Active(anon): 4596456 kB' 'Inactive(anon): 0 kB' 'Active(file): 267132 kB' 'Inactive(file): 3354312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7978952 kB' 'Mapped: 143004 kB' 'AnonPages: 242032 kB' 'Shmem: 4357508 kB' 'KernelStack: 6824 kB' 'PageTables: 3208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 74192 kB' 'Slab: 258184 kB' 'SReclaimable: 74192 kB' 'SUnreclaim: 183992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
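
At this point get_nodes has enumerated the machine's NUMA nodes (two here, reporting 512 and 513 hugepages), and get_meminfo is re-entered with a node number, which swaps its source file from /proc/meminfo to that node's sysfs meminfo. A sketch of both steps; the function name and the awk read are illustrative stand-ins (the trace shows the already-expanded values 512 and 513):

# Source selection as traced from common.sh@18-24.
pick_meminfo_file() {
    local node=$1                            # empty => system-wide
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    echo "$mem_f"
}

# Node enumeration as traced from hugepages.sh@29-32: one entry per node
# directory, keyed by its numeric suffix.
shopt -s extglob
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
done
no_nodes=${#nodes_sys[@]}                    # 2 on this machine
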
00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.961 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 
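
The surrounding arithmetic: the system-wide total read earlier (1025 pages, deliberately odd so it cannot split evenly across two nodes) has to equal requested plus surplus plus reserved pages, and each node's expected count is then topped up with the reserved pages and that node's own HugePages_Surp, which the read just traced returned as 0 for node0. Restated with this run's values (the initial per-node split is illustrative):

# Totals check as at hugepages.sh@110, with the numbers printed above.
nr_hugepages=1025 surp=0 resv=0
(( 1025 == nr_hugepages + surp + resv )) || echo 'unexpected hugepage total'

# Per-node top-up as at hugepages.sh@115-117.
nodes_test=([0]=512 [1]=513)                 # illustrative split of the 1025 pages
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    (( nodes_test[node] += 0 ))              # get_meminfo HugePages_Surp <node> -> 0
done
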
00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:44.962 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 24149604 kB' 'MemUsed: 3515168 kB' 'SwapCached: 0 kB' 'Active: 1475616 kB' 'Inactive: 136488 kB' 'Active(anon): 1356436 kB' 'Inactive(anon): 0 kB' 'Active(file): 119180 kB' 'Inactive(file): 136488 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1312256 kB' 'Mapped: 60708 kB' 'AnonPages: 299928 kB' 'Shmem: 1056588 kB' 'KernelStack: 5864 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91240 kB' 'Slab: 226224 kB' 'SReclaimable: 91240 kB' 'SUnreclaim: 134984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.963 
13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.963 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.964 13:31:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:44.964 node0=512 expecting 513 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in 
"${!nodes_test[@]}" 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:44.964 node1=513 expecting 512 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:44.964 00:02:44.964 real 0m1.478s 00:02:44.964 user 0m0.642s 00:02:44.964 sys 0m0.800s 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:44.964 13:31:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:44.964 ************************************ 00:02:44.964 END TEST odd_alloc 00:02:44.964 ************************************ 00:02:44.964 13:31:41 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:44.964 13:31:41 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:44.964 13:31:41 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:44.964 13:31:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:44.964 ************************************ 00:02:44.964 START TEST custom_alloc 00:02:44.964 ************************************ 00:02:44.964 13:31:41 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:02:44.964 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:02:44.964 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:02:44.964 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:44.965 13:31:41 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # 
HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:44.965 13:31:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:46.344 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:46.344 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:46.344 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:46.344 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:46.344 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:46.344 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:46.344 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:46.344 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:46.344 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:46.344 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:46.344 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:46.344 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:46.344 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:46.344 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:46.344 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:46.344 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:46.344 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:46.344 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:46.344 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # 
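
custom_alloc finishes its setup by serializing the per-node targets into the HUGENODE value that scripts/setup.sh consumes: one nodes_hp[N]=count entry per node, comma-joined because the function declared local IFS=, at its top (@167). The 512 and 1024 targets themselves follow from get_test_nr_hugepages being handed 1048576 kB and 2097152 kB at the 2048 kB default page size; reading that as a straight division is an inference from the printed values. A sketch of the assembly:

# HUGENODE assembly as traced from hugepages.sh@181-187. With IFS=,,
# "${arr[*]}" joins the entries with commas.
IFS=,
nodes_hp=([0]=512 [1]=1024)                  # per-node targets from this run
HUGENODE=()
_nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( _nr_hugepages += nodes_hp[node] ))
done
echo "${HUGENODE[*]}"                        # nodes_hp[0]=512,nodes_hp[1]=1024
echo "$_nr_hugepages"                        # 1536, the total verified next
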
00:02:46.344 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:02:46.344 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:02:46.345 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:46.345 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:46.345 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:46.345 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:46.345 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:46.345 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:46.345 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:46.345 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:46.345 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:46.345 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:46.345 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:46.345 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:46.345 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:46.345 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:46.345 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:46.345 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:46.345 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.345 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.345 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45816456 kB' 'MemAvailable: 49282444 kB' 'Buffers: 2704 kB' 'Cached: 9288580 kB' 'SwapCached: 0 kB' 'Active: 6345692 kB' 'Inactive: 3490800 kB' 'Active(anon): 5959380 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548400 kB' 'Mapped: 204684 kB' 'Shmem: 5414172 kB' 'KReclaimable: 165432 kB' 'Slab: 484180 kB' 'SReclaimable: 165432 kB' 'SUnreclaim: 318748 kB' 'KernelStack: 12832 kB' 'PageTables: 7604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7092132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196132 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB'
00:02:46.345 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:46.345 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:46.345 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.345 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[identical common.sh@31-32 xtrace repeated for every remaining /proc/meminfo key from MemFree through HardwareCorrupted; each fails the AnonHugePages match and continues]
00:02:46.346 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:46.346 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:46.346 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:46.346 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
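Editor's note: the lookup traced above scans /proc/meminfo (or a node's meminfo file) and prints the value for one key. A minimal sketch consistent with the trace, reconstructed rather than copied from SPDK's common.sh:

    #!/usr/bin/env bash
    # get_meminfo-style lookup: returns the value column for one meminfo key.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        # per-node stat files prefix every line with "Node N "
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            line=${line#Node +([0-9]) }            # strip the prefix when present
            IFS=': ' read -r var val _ <<<"$line"  # "MemTotal: 60541712 kB" -> MemTotal / 60541712
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done <"$mem_f"
        return 1
    }

    get_meminfo AnonHugePages     # -> 0 on this host, matching anon=0 above
    get_meminfo HugePages_Total   # -> 1536 once both nodes are populated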
00:02:46.346 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:46.346 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:46.346 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:46.346 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:46.346 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:46.346 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:46.346 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:46.346 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:46.346 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:46.346 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:46.346 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.346 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.346 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45827124 kB' 'MemAvailable: 49293112 kB' 'Buffers: 2704 kB' 'Cached: 9288584 kB' 'SwapCached: 0 kB' 'Active: 6341568 kB' 'Inactive: 3490800 kB' 'Active(anon): 5955256 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544300 kB' 'Mapped: 204160 kB' 'Shmem: 5414176 kB' 'KReclaimable: 165432 kB' 'Slab: 484116 kB' 'SReclaimable: 165432 kB' 'SUnreclaim: 318684 kB' 'KernelStack: 12832 kB' 'PageTables: 7588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7088576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB'
00:02:46.346 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:46.346 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:02:46.346 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.346 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[identical common.sh@31-32 xtrace repeated for every remaining key from MemFree through HugePages_Rsvd; each fails the HugePages_Surp match and continues]
00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
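Editor's note: the per-node counts requested via HUGENODE can also be read straight from sysfs instead of being parsed out of the per-node meminfo files. A sketch using the standard kernel paths (2048 kB page size, as reported in the dumps above; the loop itself is illustrative):

    #!/usr/bin/env bash
    # Print the 2 MB hugepage pool size configured on each NUMA node.
    for n in /sys/devices/system/node/node[0-9]*; do
        printf '%s: %s\n' "${n##*/}" \
            "$(cat "$n/hugepages/hugepages-2048kB/nr_hugepages")"
    done
    # expected for HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024: node0: 512, node1: 1024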
0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB' 00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.348 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.349 13:31:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:46.349 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
[trace condensed: get_meminfo's field scan reads each remaining /proc/meminfo key (Zswapped through HugePages_Free) with IFS=': ' read -r var val _ and hits "continue" on every non-match until HugePages_Rsvd is reached]
00:02:46.350 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:46.350 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:46.350 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:46.350 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:46.350 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:02:46.350 nr_hugepages=1536
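What the trace above is exercising is a plain field scan over /proc/meminfo: strip any "Node <n> " prefix, split each line on ': ', and print the value once the requested key matches. A minimal sketch of that logic, assuming the same file layout as the dumps in this log (the function name mirrors the get_meminfo calls in the trace, but this is an illustration, not the harness's exact setup/common.sh source):

    #!/usr/bin/env bash
    # Sketch: print the value of one meminfo field, system-wide or per NUMA node.
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # Per-node lookups read the node-local file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Node files prefix each line with "Node <n> "; strip it, then split
        # on ': ' exactly as the traced 'read -r var val _' loop does.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                printf '%s\n' "$val"
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

    get_meminfo HugePages_Rsvd      # prints 0 on this run (stored as resv above)
    get_meminfo HugePages_Total 0   # prints 512 for node 0 in this log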
00:02:46.350 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:46.350 resv_hugepages=0
00:02:46.350 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:46.350 surplus_hugepages=0
00:02:46.350 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:46.350 anon_hugepages=0
00:02:46.350 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:46.350 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:02:46.350 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:46.350 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:46.350 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:02:46.350 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:46.350 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:46.350 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:46.350 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:46.350 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:46.350 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:46.350 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:46.350 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.350 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.351 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45821840 kB' 'MemAvailable: 49287828 kB' 'Buffers: 2704 kB' 'Cached: 9288620 kB' 'SwapCached: 0 kB' 'Active: 6345536 kB' 'Inactive: 3490800 kB' 'Active(anon): 5959224 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548280 kB' 'Mapped: 204568 kB' 'Shmem: 5414212 kB' 'KReclaimable: 165432 kB' 'Slab: 484192 kB' 'SReclaimable: 165432 kB' 'SUnreclaim: 318760 kB' 'KernelStack: 12848 kB' 'PageTables: 7676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7092192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196116 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB'
[trace condensed: the field scan walks every key from MemTotal through Unaccepted, continuing on each non-match until HugePages_Total is reached]
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
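At this point the global total (1536) has been re-read and get_nodes has recorded the per-node targets (512 for node0, 1024 for node1; no_nodes=2). An invariant worth making explicit is that the node-local counters sum to the global one. A short sketch of that cross-check, reusing the hypothetical get_meminfo helper sketched earlier (this particular summation is an illustrative assumption, not a literal line of hugepages.sh):

    # Sketch: per-node HugePages_Total values should sum to the global total.
    total=$(get_meminfo HugePages_Total)   # 1536 in this run
    sum=0
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}
        # 512 on node0 plus 1024 on node1 in this log
        (( sum += $(get_meminfo HugePages_Total "$n") ))
    done
    (( sum == total )) && echo "per-node split matches: $sum == $total"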
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.352 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22689716 kB' 'MemUsed: 10187224 kB' 'SwapCached: 0 kB' 'Active: 4864308 kB' 'Inactive: 3354312 kB' 'Active(anon): 4597176 kB' 'Inactive(anon): 0 kB' 'Active(file): 267132 kB' 'Inactive(file): 3354312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7979080 kB' 'Mapped: 143016 kB' 'AnonPages: 242760 kB' 'Shmem: 4357636 kB' 'KernelStack: 6952 kB' 'PageTables: 3584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 74192 kB' 'Slab: 258036 kB' 'SReclaimable: 74192 kB' 'SUnreclaim: 183844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[trace condensed: the field scan walks every node-0 key from MemTotal through HugePages_Free, continuing on each non-match until HugePages_Surp is reached]
00:02:46.354 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:46.354 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:46.354 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:46.354 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
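Node 0's surplus check resolves to 0. For a quick manual spot-check outside the harness, the same node-local counters can be read directly; the values below are the ones visible in the node-0 dump above:

    $ grep HugePages /sys/devices/system/node/node0/meminfo
    Node 0 HugePages_Total:   512
    Node 0 HugePages_Free:    512
    Node 0 HugePages_Surp:      0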
00:02:46.354 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:46.354 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:46.354 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:46.354 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:46.354 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:02:46.354 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:46.354 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:46.354 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:46.354 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:46.354 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:46.354 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:46.354 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:46.354 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:46.354 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:46.354 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 23136204 kB' 'MemUsed: 4528568 kB' 'SwapCached: 0 kB' 'Active: 1475604 kB' 'Inactive: 136488 kB' 'Active(anon): 1356424 kB' 'Inactive(anon): 0 kB' 'Active(file): 119180 kB' 'Inactive(file): 136488 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1312268 kB' 'Mapped: 60708 kB' 'AnonPages: 299888 kB' 'Shmem: 1056600 kB' 'KernelStack: 5896 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91240 kB' 'Slab: 226148 kB' 'SReclaimable: 91240 kB' 'SUnreclaim: 134908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[trace condensed: the field scan walks every node-1 key from MemTotal through HugePages_Free, continuing on each non-match until HugePages_Surp is reached]
00:02:46.613 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:46.613 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:46.613 13:31:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:46.613 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:46.613 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:46.613 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:46.613 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:46.613 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:46.613 node0=512 expecting 512
00:02:46.613 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:46.613 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:46.613 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:46.613 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting
1024' 00:02:46.613 node1=1024 expecting 1024 00:02:46.613 13:31:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:46.613 00:02:46.613 real 0m1.544s 00:02:46.613 user 0m0.653s 00:02:46.613 sys 0m0.857s 00:02:46.613 13:31:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:02:46.613 13:31:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:46.613 ************************************ 00:02:46.613 END TEST custom_alloc 00:02:46.613 ************************************ 00:02:46.613 13:31:43 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:46.613 13:31:43 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:46.613 13:31:43 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:46.613 13:31:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:46.613 ************************************ 00:02:46.613 START TEST no_shrink_alloc 00:02:46.613 ************************************ 00:02:46.613 13:31:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:02:46.613 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:46.613 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:46.614 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:46.614 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:02:46.614 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:46.614 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:46.614 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:46.614 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:46.614 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:46.614 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:46.614 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:46.614 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:46.614 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:46.614 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:46.614 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:46.614 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:46.614 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:46.614 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:46.614 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:46.614 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:02:46.614 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:46.614 13:31:43 setup.sh.hugepages.no_shrink_alloc -- 
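The read/compare cycle that fills this trace is setup/common.sh's get_meminfo helper: it slurps /proc/meminfo (or a per-node /sys/devices/system/node/nodeN/meminfo file, whose lines carry a "Node N " prefix that gets stripped), then scans field by field until the requested key matches and echoes its value. The backslash-heavy [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] lines are just how set -x prints the quoted right-hand side: every character is escaped to mark it as a literal match rather than a glob. A minimal bash sketch reconstructed from the xtrace line numbers above; the argument handling and the extglob detail are assumptions, not SPDK's exact source:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Sketch of the helper traced as setup/common.sh@16-33; treat it as an
    # approximation of the upstream implementation, not a copy of it.
    get_meminfo() {
        local get=$1      # field to look up, e.g. HugePages_Surp
        local node=${2:-} # optional NUMA node id
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node statistics live under /sys when a node id is supplied
        # (the trace probes /sys/devices/system/node/node$node/meminfo).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"   # the trace above shows "echo 0" for HugePages_Surp
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 on the machine traced above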
00:02:46.614 13:31:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:47.994 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:47.994 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:47.994 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:47.995 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:47.995 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:47.995 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:47.995 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:47.995 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:47.995 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:47.995 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:47.995 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:47.995 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:47.995 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:47.995 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:47.995 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:47.995 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:47.995 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:47.995 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:02:47.995 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:02:47.995 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:47.995 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:47.995 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:47.995 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:47.995 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:47.995 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:47.995 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:47.995 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:47.995 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:47.995 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:47.995 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:47.995 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:47.995 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:47.995 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:47.995 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:47.995 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:47.995 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:47.995 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:47.995 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46856216 kB' 'MemAvailable: 50322204 kB' 'Buffers: 2704 kB' 'Cached: 9288704 kB' 'SwapCached: 0 kB' 'Active: 6340564 kB' 'Inactive: 3490800 kB' 'Active(anon): 5954252 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543180 kB' 'Mapped: 203780 kB' 'Shmem: 5414296 kB' 'KReclaimable: 165432 kB' 'Slab: 484192 kB' 'SReclaimable: 165432 kB' 'SUnreclaim: 318760 kB' 'KernelStack: 12848 kB' 'PageTables: 7656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7086268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB'
[... xtrace omitted: setup/common.sh@31-32 walks the fields above with the read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycle until the requested key matches ...]
00:02:47.997 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:47.997 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:47.997 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:47.997 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:47.997 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:47.997 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:47.997 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:47.997 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:47.997 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:47.997 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:47.997 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:47.997 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:47.997 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:47.997 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:47.997 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:47.997 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:47.997 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46861024 kB' 'MemAvailable: 50327012 kB' 'Buffers: 2704 kB' 'Cached: 9288708 kB' 'SwapCached: 0 kB' 'Active: 6340152 kB' 'Inactive: 3490800 kB' 'Active(anon): 5953840 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542760 kB' 'Mapped: 203740 kB' 'Shmem: 5414300 kB' 'KReclaimable: 165432 kB' 'Slab: 484164 kB' 'SReclaimable: 165432 kB' 'SUnreclaim: 318732 kB' 'KernelStack: 12864 kB' 'PageTables: 7632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7086284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB'
[... xtrace omitted: the same setup/common.sh@31-32 read/compare/continue cycle now scans the dump above for \H\u\g\e\P\a\g\e\s\_\S\u\r\p ...]
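Before the scan lands on HugePages_Surp below, the dumps are worth a sanity check: get_test_nr_hugepages asked for 2097152 kB and, with the default 2048 kB huge page size, set nr_hugepages=1024; the kernel's figures agree, since HugePages_Total times Hugepagesize equals the Hugetlb line. A quick shell check of that arithmetic:

    # 2097152 kB requested / 2048 kB per huge page = 1024 pages
    echo $((2097152 / 2048))   # -> 1024 (matches nr_hugepages in the trace)
    # 1024 pages * 2048 kB = 2097152 kB (matches the Hugetlb line)
    echo $((1024 * 2048))      # -> 2097152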
HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.998 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:47.999 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:47.999 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:47.999 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:47.999 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:47.999 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:47.999 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:47.999 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:47.999 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.999 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.999 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.999 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.999 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.999 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.999 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.999 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46861156 kB' 'MemAvailable: 50327144 kB' 'Buffers: 2704 kB' 'Cached: 9288728 kB' 'SwapCached: 0 kB' 'Active: 6340140 kB' 'Inactive: 3490800 kB' 'Active(anon): 5953828 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542728 kB' 'Mapped: 203740 kB' 'Shmem: 5414320 kB' 'KReclaimable: 165432 kB' 'Slab: 484224 kB' 'SReclaimable: 165432 kB' 'SUnreclaim: 318792 kB' 'KernelStack: 12864 kB' 'PageTables: 7648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7086308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB' 00:02:47.999 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.999 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:47.999 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.999 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.999 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.999 13:31:44 
00:02:48.001 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:48.001 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:48.001 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:48.001 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:48.001 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:48.001 nr_hugepages=1024
00:02:48.001 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:48.001 resv_hugepages=0
00:02:48.001 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:48.001 surplus_hugepages=0
00:02:48.001 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:48.001 anon_hugepages=0
00:02:48.001 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:48.001 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:48.001 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:48.001 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:48.001 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:48.001 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:48.001 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:48.001 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:48.001 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:48.001 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:48.001 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:48.001 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:48.001 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:48.001 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:48.001 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46862144 kB' 'MemAvailable: 50328132 kB' 'Buffers: 2704 kB' 'Cached: 9288748 kB' 'SwapCached: 0 kB' 'Active: 6340436 kB' 'Inactive: 3490800 kB' 'Active(anon): 5954124 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543024 kB' 'Mapped: 203764 kB' 'Shmem: 5414340 kB' 'KReclaimable: 165432 kB' 'Slab: 484224 kB' 'SReclaimable: 165432 kB' 'SUnreclaim: 318792 kB' 'KernelStack: 12928 kB' 'PageTables: 7492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7088688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB'
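The guards at hugepages.sh@107 and @109 above are the core assertion of the no_shrink_alloc case: after the workload, the preallocated pool must still balance exactly. A sketch of that bookkeeping, using the variable names from the trace (a reconstruction, not the upstream script):

nr_hugepages=1024                      # requested pool size for the test
surp=$(get_meminfo HugePages_Surp)     # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
# Both must hold for the test to proceed:
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
(( $(get_meminfo HugePages_Total) == nr_hugepages ))

With HugePages_Total reporting 1024 in the dump above, both checks reduce to 1024 == 1024 and pass.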
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
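get_nodes at hugepages.sh@27-33 above records one nodes_sys entry per NUMA node directory and finds two nodes (no_nodes=2), with all 1024 pages on node 0 and none on node 1. A sketch of that enumeration, reconstructed from the trace; the trace does not show where the per-node count comes from, so the nr_hugepages path below is an assumption:

shopt -s extglob
nodes_sys=()

get_nodes() {
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        # Index by the trailing node number, e.g. .../node0 -> 0 (hugepages.sh@30).
        # Assumed source of the count: the node's 2 MB hugepage pool.
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}   # 2 on this box
    (( no_nodes > 0 ))          # at least one node must exist
}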
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:48.003 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21645948 kB' 'MemUsed: 11230992 kB' 'SwapCached: 0 kB' 'Active: 4864000 kB' 'Inactive: 3354312 kB' 'Active(anon): 4596868 kB' 'Inactive(anon): 0 kB' 'Active(file): 267132 kB' 'Inactive(file): 3354312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7979204 kB' 'Mapped: 143056 kB' 'AnonPages: 242296 kB' 'Shmem: 4357760 kB' 'KernelStack: 6968 kB' 'PageTables: 3244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 74192 kB' 'Slab: 258048 kB' 'SReclaimable: 74192 kB' 'SUnreclaim: 183856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
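Note the branch at common.sh@23-24 above: with node=0 the helper reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo, and the strip at common.sh@29 removes the "Node 0 " prefix those lines carry so the same key scan works on both files. The per-node dump also carries a MemUsed field that /proc/meminfo lacks; the figures printed are self-consistent:

# MemUsed for the node is MemTotal - MemFree:
echo $(( 32876940 - 21645948 ))   # -> 11230992 (kB), matching 'MemUsed: 11230992 kB' above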
00:02:48.004 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.004
00:02:48.004 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:48.004 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:48.004 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:48.004 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:48.004 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:48.004 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:48.004 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:48.004 node0=1024 expecting 1024
00:02:48.004 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:48.004 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:02:48.004 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:02:48.004 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:02:48.004 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:48.004 13:31:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:49.403 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:49.403 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:49.403 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:49.403 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:49.403 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:49.403 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:49.403 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:49.403 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:49.403 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:49.403 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:49.403 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:49.403 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:49.403 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:49.403 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:49.403 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:49.403 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:49.403 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:49.403 INFO: Requested 512 hugepages but 1024 already allocated on node0
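The xtrace above captures the whole pattern setup/common.sh uses for these lookups: slurp the meminfo file into an array, then split each entry on ': ' and return the value once the requested key is reached. A minimal bash sketch of that loop, reconstructed from the trace rather than copied from the SPDK source (the name lookup_meminfo is hypothetical; the real get_meminfo also accepts an optional NUMA node and then reads the per-node meminfo file instead):

    #!/usr/bin/env bash
    # Print the value column for one /proc/meminfo key, e.g. HugePages_Free.
    # Hypothetical helper mirroring the IFS=': ' / read / continue triplets
    # visible in the xtrace; not the actual setup/common.sh function.
    lookup_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every non-matching key
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    lookup_meminfo HugePages_Total   # prints 1024 on this runner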
00:02:49.403 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:02:49.403 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:02:49.403 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:49.403 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:49.403 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:49.403 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:49.403 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:49.403 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:49.403 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:49.403 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:49.403 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:49.403 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:49.403 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:49.403 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.403 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:49.403 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:49.403 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.403 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:49.403 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:49.403 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:49.403 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46854160 kB' 'MemAvailable: 50320148 kB' 'Buffers: 2704 kB' 'Cached: 9288820 kB' 'SwapCached: 0 kB' 'Active: 6339704 kB' 'Inactive: 3490800 kB' 'Active(anon): 5953392 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542172 kB' 'Mapped: 203804 kB' 'Shmem: 5414412 kB' 'KReclaimable: 165432 kB' 'Slab: 484192 kB' 'SReclaimable: 165432 kB' 'SUnreclaim: 318760 kB' 'KernelStack: 12816 kB' 'PageTables: 7476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7086512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196256 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB'
[... identical xtrace triplets repeat for each key from MemTotal through HardwareCorrupted, none matching AnonHugePages ...]
00:02:49.404 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:49.404 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:49.404 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:49.404 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
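The AnonHugePages lookup coming back 0 tells verify_nr_hugepages that no transparent hugepages are in use (the @96 check above shows THP in madvise mode, so the value is consulted at all). For a single key the whole scan collapses to a one-liner; this awk form is only an illustrative equivalent of what the traced loop computes, not something the harness runs:

    awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo   # -> 0 on this host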
00:02:49.404 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:49.404 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:49.404 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:49.404 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:49.404 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:49.404 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.404 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:49.404 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:49.404 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.404 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:49.404 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:49.404 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:49.405 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46855100 kB' 'MemAvailable: 50321088 kB' 'Buffers: 2704 kB' 'Cached: 9288824 kB' 'SwapCached: 0 kB' 'Active: 6340432 kB' 'Inactive: 3490800 kB' 'Active(anon): 5954120 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542872 kB' 'Mapped: 203748 kB' 'Shmem: 5414416 kB' 'KReclaimable: 165432 kB' 'Slab: 484184 kB' 'SReclaimable: 165432 kB' 'SUnreclaim: 318752 kB' 'KernelStack: 12848 kB' 'PageTables: 7564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7086528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB'
[... identical xtrace triplets repeat for each key from MemTotal through HugePages_Rsvd, none matching HugePages_Surp ...]
00:02:49.406 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:49.406 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:49.406 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:49.406 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
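Both remaining lookups (HugePages_Surp above, HugePages_Rsvd below) reuse the same parser, and the mem=("${mem[@]#Node +([0-9]) }") step in its prologue is what makes that reuse work: per-node meminfo files under /sys/devices/system/node prefix every line with "Node N ", and the extglob pattern strips that prefix so system-wide and per-node files parse identically. A small demonstration of just that expansion (illustrative, not taken from the SPDK tree):

    shopt -s extglob                       # the +([0-9]) pattern needs extglob, as in setup/common.sh
    line='Node 0 HugePages_Free:   1024'   # shape of /sys/devices/system/node/node0/meminfo lines
    echo "${line#Node +([0-9]) }"          # -> 'HugePages_Free:   1024'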
00:02:49.406 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:49.406 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:49.406 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:49.406 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:49.406 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:49.406 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.406 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:49.406 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:49.406 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.406 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:49.406 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:49.406 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:49.406 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46855444 kB' 'MemAvailable: 50321432 kB' 'Buffers: 2704 kB' 'Cached: 9288824 kB' 'SwapCached: 0 kB' 'Active: 6340408 kB' 'Inactive: 3490800 kB' 'Active(anon): 5954096 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542848 kB' 'Mapped: 203748 kB' 'Shmem: 5414416 kB' 'KReclaimable: 165432 kB' 'Slab: 484204 kB' 'SReclaimable: 165432 kB' 'SUnreclaim: 318772 kB' 'KernelStack: 12848 kB' 'PageTables: 7584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7086552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB'
[... identical xtrace triplets repeat for each key from MemTotal through Bounce, none matching HugePages_Rsvd ...]
setup/common.sh@31 -- # IFS=': ' 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.408 13:31:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:49.408 nr_hugepages=1024 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:49.408 resv_hugepages=0 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:49.408 surplus_hugepages=0 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:49.408 anon_hugepages=0 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.409 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.409 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.409 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.409 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.409 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46855192 kB' 'MemAvailable: 50321180 kB' 'Buffers: 2704 kB' 'Cached: 9288856 kB' 'SwapCached: 0 kB' 'Active: 6340500 kB' 'Inactive: 3490800 kB' 'Active(anon): 5954188 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 
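The records above are one full pass of the setup/common.sh get_meminfo helper: it loads a meminfo file, strips any per-node prefix, then scans key by key until the requested field matches. A condensed sketch of that pattern, reconstructed from the xtrace (it reads the file directly instead of going through the script's printf plumbing, so treat it as approximate rather than the literal SPDK source):

# Reconstructed from the xtrace: return one numeric field from /proc/meminfo,
# or from a per-node meminfo file when a node index is supplied.
shopt -s extglob

get_meminfo() {
	local get=$1 node=$2
	local var val _ line
	local mem_f=/proc/meminfo mem
	# Per-node stats live under sysfs; fall back to the global file otherwise.
	[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo
	mapfile -t mem < "$mem_f"
	# Per-node lines are prefixed "Node N "; strip it so keys line up.
	mem=("${mem[@]#Node +([0-9]) }")
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] && { echo "$val"; return 0; }
	done
	return 1
}

# Usage as in the trace: resv=$(get_meminfo HugePages_Rsvd)

Each field the test reads this way costs a full scan of the file, which is why the xtrace repeats the same loop for HugePages_Rsvd, HugePages_Total, and HugePages_Surp in turn.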
00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:49.408 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:49.409 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.409 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:49.409 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:49.409 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:49.409 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46855192 kB' 'MemAvailable: 50321180 kB' 'Buffers: 2704 kB' 'Cached: 9288856 kB' 'SwapCached: 0 kB' 'Active: 6340500 kB' 'Inactive: 3490800 kB' 'Active(anon): 5954188 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542492 kB' 'Mapped: 203748 kB' 'Shmem: 5414448 kB' 'KReclaimable: 165432 kB' 'Slab: 484204 kB' 'SReclaimable: 165432 kB' 'SUnreclaim: 318772 kB' 'KernelStack: 12864 kB' 'PageTables: 7636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7086572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 31488 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1300060 kB' 'DirectMap2M: 12251136 kB' 'DirectMap1G: 55574528 kB'
[xtrace elided (00:02:49.409-00:02:49.410): the same per-key scan loop, this time continuing past every field until HugePages_Total]
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
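The get_nodes trace just above enumerates NUMA nodes through sysfs globbing and records each node's current 2 MB hugepage count (1024 on node0, 0 on node1 here). A minimal sketch of that enumeration; the nr_hugepages read inside the loop is an assumption, since the xtrace shows only the resulting assignments:

# Sketch of the get_nodes pattern seen in the trace: index nodes_sys by the
# numeric suffix of each /sys/devices/system/node/nodeN directory.
shopt -s extglob

declare -a nodes_sys
get_nodes() {
	local node
	for node in /sys/devices/system/node/node+([0-9]); do
		# Assumption: the recorded value is the node's allocated 2048 kB hugepages.
		nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
	done
	no_nodes=${#nodes_sys[@]}
	(( no_nodes > 0 ))  # the caller treats zero visible nodes as a failure
}

With the per-node counts in hand, the test then calls get_meminfo with a node index, which is why the next pass reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo.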
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:49.410 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:49.411 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21646404 kB' 'MemUsed: 11230536 kB' 'SwapCached: 0 kB' 'Active: 4864032 kB' 'Inactive: 3354312 kB' 'Active(anon): 4596900 kB' 'Inactive(anon): 0 kB' 'Active(file): 267132 kB' 'Inactive(file): 3354312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7979308 kB' 'Mapped: 143040 kB' 'AnonPages: 242168 kB' 'Shmem: 4357864 kB' 'KernelStack: 6936 kB' 'PageTables: 3488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 74192 kB' 'Slab: 258016 kB' 'SReclaimable: 74192 kB' 'SUnreclaim: 183824 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided (00:02:49.411-00:02:49.412): per-key scan loop over the node0 meminfo fields, continuing until HugePages_Surp]
00:02:49.412 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:49.412 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:49.412 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:49.412 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:49.412 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:49.412 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:49.412 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:49.412 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:49.412 node0=1024 expecting 1024
00:02:49.412 13:31:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:49.412
00:02:49.412 real 0m2.949s
00:02:49.412 user 0m1.236s
00:02:49.412 sys 0m1.639s
00:02:49.412 13:31:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:02:49.412 13:31:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:49.412 ************************************
00:02:49.412 END TEST no_shrink_alloc
00:02:49.412 ************************************
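With the no_shrink_alloc test done, the clear_hp block that follows resets every hugepage pool before the next suite runs. A sketch of that pattern, assuming each bare `echo 0` in the trace redirects into the pool's nr_hugepages file (the redirect target is collapsed out of the xtrace):

# Sketch of clear_hp as traced below: zero every hugepage pool size on every
# node, then flag the environment so later stages know the pools were cleared.
clear_hp() {
	local node hp
	for node in "${!nodes_sys[@]}"; do
		for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*; do
			# Assumption: the trace's `echo 0` writes here.
			echo 0 > "$hp/nr_hugepages"
		done
	done
	export CLEAR_HUGE=yes
}

Two nodes times two pool sizes explains the four echo records in the trace below.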
00:02:49.777 13:31:46 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:02:49.777 13:31:46 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:02:49.777 13:31:46 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:02:49.777 13:31:46 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:49.777 ************************************
00:02:49.777 START TEST driver
00:02:49.777 ************************************
00:02:49.777 13:31:46 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:02:49.777 * Looking for test storage...
00:02:49.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:49.777 13:31:46 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:02:49.777 13:31:46 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:49.777 13:31:46 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:52.382 13:31:49 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:02:52.382 13:31:49 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:02:52.382 13:31:49 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable
00:02:52.382 13:31:49 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:02:52.382 ************************************
00:02:52.382 START TEST guess_driver
00:02:52.382 ************************************
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 ))
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:02:52.382 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:02:52.382 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:02:52.382 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:02:52.382 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:02:52.382 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:02:52.382 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:02:52.382 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:02:52.382 Looking for driver=vfio-pci
13:31:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:02:52.382 13:31:49 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:02:53.317 13:31:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:02:53.317 13:31:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:02:53.317 13:31:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[the same @58/@61/@57 marker check repeats for each remaining device line reported by setup.sh config]
00:02:54.514 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:02:54.514 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:02:54.514 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:54.514 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:02:54.514 13:31:51 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:02:54.514 13:31:51 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:54.514 13:31:51 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:57.799
00:02:57.799 real 0m5.021s
00:02:57.799 user 0m1.124s
00:02:57.799 sys 0m1.926s
00:02:57.799 13:31:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable
00:02:57.799 13:31:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:02:57.799 ************************************
00:02:57.799 END TEST guess_driver
00:02:57.799 ************************************
00:02:57.799
00:02:57.799 real 0m7.665s
00:02:57.799 user 0m1.697s
00:02:57.799 sys 0m2.969s
00:02:57.799 13:31:54 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable
00:02:57.799 13:31:54 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:02:57.799 ************************************
00:02:57.799 END TEST driver
00:02:57.799 ************************************
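guess_driver settles on vfio-pci by checking IOMMU availability and then asking modprobe whether the module's dependency chain resolves to real .ko files, which is exactly the insmod list echoed above. A minimal sketch of that probe under the same sysfs paths; the surrounding if/echo is illustrative, not the driver.sh source, and it assumes at least one IOMMU group exists (141 on this machine):

    # A module counts as usable if modprobe can resolve it to .ko files.
    is_driver() {
        modprobe --show-depends "$1" 2>/dev/null | grep -q '\.ko'
    }

    iommu_groups=(/sys/kernel/iommu_groups/*)      # non-empty here; add nullglob for the general case
    if (( ${#iommu_groups[@]} > 0 )) && is_driver vfio_pci; then
        echo vfio-pci    # matches 'Looking for driver=vfio-pci' above
    fi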
00:02:57.799 13:31:54 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:02:57.799 13:31:54 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:02:57.799 13:31:54 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:02:57.799 13:31:54 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:57.799 ************************************
00:02:57.799 START TEST devices
00:02:57.799 ************************************
00:02:57.799 13:31:54 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:02:57.799 * Looking for test storage...
00:02:57.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:57.799 13:31:54 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:02:57.799 13:31:54 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:02:57.799 13:31:54 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:57.799 13:31:54 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:58.734 13:31:55 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:02:58.734 13:31:55 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:02:58.734 13:31:55 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:02:58.734 13:31:55 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf
00:02:58.734 13:31:55 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:02:58.734 13:31:55 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:02:58.734 13:31:55 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:02:58.734 13:31:55 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:58.734 13:31:55 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:02:58.734 13:31:55 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:02:58.734 13:31:55 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:02:58.734 13:31:55 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:02:58.734 13:31:55 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:02:58.734 13:31:55 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:02:58.734 13:31:55 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:02:58.734 13:31:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:02:58.734 13:31:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:02:58.734 13:31:55 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0
00:02:58.734 13:31:55 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]]
00:02:58.734 13:31:55 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:02:58.734 13:31:55 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:02:58.734 13:31:55 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:02:58.734 No valid GPT data, bailing
00:02:58.734 13:31:55 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:02:58.734 13:31:55 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:02:58.734 13:31:55 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:02:58.734 13:31:55 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:02:58.734 13:31:55 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:02:58.734 13:31:55 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:02:58.734 13:31:55 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016
00:02:58.734 13:31:55 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:02:58.735 13:31:55 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:02:58.735 13:31:55 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0
00:02:58.735 13:31:55 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:02:58.735 13:31:55 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
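The device scan just traced qualifies nvme0n1 in three steps: it is not zoned, it carries no partition table (spdk-gpt.py and blkid both come back empty, hence "No valid GPT data, bailing"), and it is at least min_disk_size bytes. A minimal sketch under the same rules; the loop body is illustrative while the paths and the 3 GiB threshold come from the log:

    min_disk_size=3221225472   # 3 GiB, as in the trace
    for block in /sys/block/nvme*; do
        dev=${block##*/}
        [[ $(<"$block/queue/zoned") == none ]] || continue            # skip zoned namespaces
        [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || continue  # already partitioned, in use
        size=$(( $(<"$block/size") * 512 ))                           # size file counts 512-byte sectors
        (( size >= min_disk_size )) && echo "usable: $dev ($size bytes)"
    done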
blocks_to_pci["${block##*/}"]=0000:88:00.0 00:02:58.735 13:31:55 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:02:58.735 13:31:55 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:02:58.735 13:31:55 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:02:58.735 13:31:55 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:02:58.735 13:31:55 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:02:58.735 13:31:55 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:02:58.735 ************************************ 00:02:58.735 START TEST nvme_mount 00:02:58.735 ************************************ 00:02:58.735 13:31:55 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:02:58.735 13:31:55 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:02:58.735 13:31:55 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:02:58.735 13:31:55 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:58.735 13:31:55 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:58.735 13:31:55 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:02:58.735 13:31:55 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:02:58.735 13:31:55 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:02:58.735 13:31:55 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:02:58.735 13:31:55 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:02:58.735 13:31:55 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:02:58.735 13:31:55 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:02:58.735 13:31:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:02:58.735 13:31:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:58.735 13:31:55 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:58.735 13:31:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:02:58.735 13:31:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:58.735 13:31:55 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:02:58.735 13:31:55 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:02:58.735 13:31:55 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:00.114 Creating new GPT entries in memory. 00:03:00.114 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:00.114 other utilities. 00:03:00.114 13:31:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:00.114 13:31:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:00.114 13:31:56 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
00:03:00.114 13:31:56 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:00.114 13:31:56 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:01.053 Creating new GPT entries in memory.
00:03:01.053 The operation has completed successfully.
00:03:01.053 13:31:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:01.053 13:31:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:01.053 13:31:57 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 431489
00:03:01.053 13:31:57 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:01.053 13:31:57 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=
00:03:01.053 13:31:57 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:01.053 13:31:57 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:03:01.053 13:31:57 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:03:01.053 13:31:57 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:01.053 13:31:57 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:01.053 13:31:57 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:03:01.053 13:31:57 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:03:01.053 13:31:57 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:01.053 13:31:57 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:01.053 13:31:57 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:01.053 13:31:57 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:01.053 13:31:57 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:03:01.053 13:31:57 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:01.053 13:31:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:01.053 13:31:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:03:01.053 13:31:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:01.053 13:31:57 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:01.053 13:31:57 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:01.991 13:31:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
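The setup phase of nvme_mount is the classic wipe/partition/format/mount sequence, with sgdisk serialized against concurrent access by flock on the whole disk. A minimal sketch of the same sequence; it is destructive and only for a scratch disk, and the mount point variable here is illustrative:

    disk=/dev/nvme0n1
    mnt=./nvme_mount
    sgdisk "$disk" --zap-all                           # destroy GPT and MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199  # one 1 GiB partition, as above
    mkfs.ext4 -qF "${disk}p1"
    mkdir -p "$mnt" && mount "${disk}p1" "$mnt"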
13:31:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:03:01.991 13:31:58 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:01.991 13:31:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:01.991 13:31:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:01.991 13:31:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[the same @62/@60 mismatch check repeats for 0000:00:04.6 through 0000:00:04.0 and 0000:80:04.7 through 0000:80:04.0]
00:03:02.250 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:02.250 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:02.250 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:02.250 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:02.250 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:02.250 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:03:02.250 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:02.250 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:02.250 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:02.250 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:02.250 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:02.250 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:02.250 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:02.509 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:02.509 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:03:02.509 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:02.509 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:02.509 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:03:02.509 13:31:59 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:03:02.509 13:31:59 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:02.509 13:31:59 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:03:02.509 13:31:59 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:03:02.509 13:31:59 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:02.509 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
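The verify step walks the setup.sh config report line by line as "<pci> <x> <y> <status>" and flags success when the device under test, 0000:88:00.0 here, carries the expected "Active devices: ..." entry; every other PCI address simply fails the first comparison and is read past, which accounts for the long runs of @62/@60 lines. A sketch of that matching loop, with the config command left abstract (setup_config stands in for the real invocation):

    found=0
    while read -r pci _ _ status; do
        [[ $pci == 0000:88:00.0 ]] || continue     # skip the I/OAT channels etc.
        [[ $status == *"Active devices: "*"nvme0n1:nvme0n1"* ]] && found=1
    done < <(setup_config)   # i.e. PCI_ALLOWED=... scripts/setup.sh config
    (( found == 1 ))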
00:03:02.509 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:03:02.509 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:03:02.509 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:02.509 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:02.510 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:02.510 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:02.510 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:03:02.510 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:02.510 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:02.510 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:03:02.510 13:31:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:02.510 13:31:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:02.510 13:31:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:03.888 13:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:03.888 13:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:03:03.888 13:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:03.888 13:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[the same @62/@60 mismatch check repeats for 0000:00:04.7 through 0000:00:04.0 and 0000:80:04.7 through 0000:80:04.0]
00:03:03.888 13:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:03.888 13:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:03.888 13:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:03.888 13:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:03.888 13:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:03.888 13:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:03.888 13:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' ''
00:03:03.888 13:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:03:03.888 13:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:03:03.888 13:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=
00:03:03.888 13:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=
00:03:03.888 13:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:03.888 13:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:03:03.888 13:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:03.888 13:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:03.888 13:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:03:03.888 13:32:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:03.888 13:32:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:03.888 13:32:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:05.268 13:32:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:05.268 13:32:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:03:05.268 13:32:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:05.268 13:32:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[the same @62/@60 mismatch check repeats for 0000:00:04.7 through 0000:00:04.0 and 0000:80:04.7 through 0000:80:04.0]
00:03:05.268 13:32:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:05.268 13:32:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:03:05.268 13:32:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:03:05.268 13:32:02 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:03:05.268 13:32:02 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:05.268 13:32:02 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:05.268 13:32:02 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:05.268 13:32:02 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:05.268 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:05.268
00:03:05.268 real 0m6.430s
00:03:05.268 user 0m1.535s
00:03:05.268 sys 0m2.478s
00:03:05.268 13:32:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:05.268 13:32:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x
00:03:05.268 ************************************
00:03:05.268 END TEST nvme_mount
00:03:05.268 ************************************
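Teardown in these tests is the mirror image of setup: unmount if still mounted, then wipefs the partition and the raw disk; wipefs echoes each signature it erases, which is where the "bytes were erased" lines above come from. A minimal sketch, assuming the same device names and mount point variable as earlier:

    mountpoint -q "$mnt" && umount "$mnt"
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1  # ext4 magic at 0x438
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1      # GPT headers and protective MBR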
00:03:05.268 13:32:02 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:03:05.268 13:32:02 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:05.268 13:32:02 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:05.268 13:32:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:05.268 ************************************
00:03:05.268 START TEST dm_mount
00:03:05.268 ************************************
00:03:05.268 13:32:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount
00:03:05.268 13:32:02 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:03:05.268 13:32:02 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:03:05.268 13:32:02 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:03:05.268 13:32:02 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:03:05.268 13:32:02 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:05.268 13:32:02 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2
00:03:05.268 13:32:02 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824
00:03:05.268 13:32:02 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:05.268 13:32:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=()
00:03:05.268 13:32:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts
00:03:05.268 13:32:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:03:05.268 13:32:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:05.268 13:32:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:05.268 13:32:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:05.268 13:32:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:05.268 13:32:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:05.268 13:32:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:05.268 13:32:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:05.268 13:32:02 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:05.268 13:32:02 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:05.268 13:32:02 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:03:06.203 Creating new GPT entries in memory.
00:03:06.203 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:06.203 other utilities.
00:03:06.203 13:32:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:03:06.203 13:32:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:06.203 13:32:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:06.203 13:32:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:06.203 13:32:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:07.582 Creating new GPT entries in memory.
00:03:07.583 The operation has completed successfully.
00:03:07.583 13:32:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:07.583 13:32:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:07.583 13:32:04 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:07.583 13:32:04 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:07.583 13:32:04 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:03:08.519 The operation has completed successfully.
00:03:08.519 13:32:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:08.519 13:32:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:08.519 13:32:05 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 433885
00:03:08.519 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:03:08.519 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:08.519 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:08.519 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:03:08.519 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5}
00:03:08.519 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:08.519 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break
00:03:08.519 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:08.519 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:03:08.519 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:03:08.519 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0
00:03:08.519 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:03:08.519 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:03:08.519 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:08.519 13:32:05 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size=
00:03:08.519 13:32:05 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:08.519 13:32:05 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:08.520 13:32:05 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:03:08.520 13:32:05 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
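dm_mount stitches the two fresh partitions into one device-mapper volume before formatting it. dmsetup create reads the table from stdin when none is given on the command line; the linear table below is illustrative, since the log does not show the table the test fed in, but the create/readlink/holders steps mirror the trace:

    size1=$(blockdev --getsz /dev/nvme0n1p1)   # lengths in 512-byte sectors
    size2=$(blockdev --getsz /dev/nvme0n1p2)
    printf '%s\n' \
        "0 $size1 linear /dev/nvme0n1p1 0" \
        "$size1 $size2 linear /dev/nvme0n1p2 0" | dmsetup create nvme_dm_test
    readlink -f /dev/mapper/nvme_dm_test       # resolves to /dev/dm-0, as above
    ls /sys/class/block/nvme0n1p1/holders      # dm-0 now holds both partitions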
00:03:08.520 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:08.520 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:03:08.520 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:03:08.520 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:08.520 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:08.520 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:03:08.520 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:03:08.520 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # :
00:03:08.520 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:03:08.520 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:08.520 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:03:08.520 13:32:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:03:08.520 13:32:05 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:08.520 13:32:05 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:09.453 13:32:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:09.453 13:32:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:03:09.453 13:32:06 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:03:09.453 13:32:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[the same @62/@60 mismatch check repeats for 0000:00:04.7 through 0000:00:04.0 and 0000:80:04.7 through 0000:80:04.0]
00:03:09.712 13:32:06 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:09.712 13:32:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
00:03:09.712 13:32:06 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:09.712 13:32:06 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:03:09.712 13:32:06 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:09.712 13:32:06 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:09.712 13:32:06 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:03:09.712 13:32:06 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:03:09.712 13:32:06 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
00:03:09.712 13:32:06 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=
00:03:09.712 13:32:06 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=
00:03:09.712 13:32:06 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:03:09.712 13:32:06 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:03:9.712 13:32:06 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:03:09.712 13:32:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:09.712 13:32:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:03:09.712 13:32:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
setup/common.sh@9 -- # [[ output == output ]] 00:03:09.712 13:32:06 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:11.088 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:11.089 13:32:07 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:11.089 13:32:08 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:11.089 13:32:08 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:11.089 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:11.089 13:32:08 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:11.089 13:32:08 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:11.089 00:03:11.089 real 0m5.835s 00:03:11.089 user 0m0.980s 00:03:11.089 sys 0m1.739s 00:03:11.089 13:32:08 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:11.089 13:32:08 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:11.089 ************************************ 00:03:11.089 END TEST dm_mount 00:03:11.089 ************************************ 00:03:11.089 13:32:08 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:11.089 13:32:08 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:11.089 13:32:08 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:11.089 13:32:08 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:11.089 13:32:08 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:11.089 13:32:08 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:11.089 13:32:08 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:11.347 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:11.347 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:11.347 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:11.347 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:11.347 13:32:08 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 
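The two verify passes above reduce to a small loop in setup/devices.sh: "setup.sh config" prints one "<bdf> <vendor> <device> <status...>" line per controller, and only the BDF kept in PCI_ALLOWED (0000:88:00.0 on this rig) is expected to list the mounted consumers in its status column. The backslash-heavy patterns in the trace (\A\c\t\i\v\e...) are just bash xtrace quoting of an ordinary [[ == glob ]] substring match. A condensed sketch with names taken from the trace, not the full script:

    verify() {
        local dev=$1 mounts=$2 found=0 pci status
        # setup.sh config emits "<bdf> <vendor> <device> <status>" per device
        while read -r pci _ _ status; do
            if [[ $pci == "$dev" ]]; then
                # xtrace shows this pattern with every character escaped
                [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
            fi
        done < <(PCI_ALLOWED=$dev "$rootdir/scripts/setup.sh" config)
        (( found == 1 ))
    }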
00:03:11.347 13:32:08 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:11.347 13:32:08 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:11.347 13:32:08 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:11.347 13:32:08 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:11.347 13:32:08 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:11.347 13:32:08 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:11.347 00:03:11.347 real 0m14.181s 00:03:11.347 user 0m3.173s 00:03:11.347 sys 0m5.236s 00:03:11.347 13:32:08 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:11.348 13:32:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:11.348 ************************************ 00:03:11.348 END TEST devices 00:03:11.348 ************************************ 00:03:11.348 00:03:11.348 real 0m44.904s 00:03:11.348 user 0m12.957s 00:03:11.348 sys 0m20.068s 00:03:11.348 13:32:08 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:11.348 13:32:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:11.348 ************************************ 00:03:11.348 END TEST setup.sh 00:03:11.348 ************************************ 00:03:11.606 13:32:08 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:12.542 Hugepages 00:03:12.542 node hugesize free / total 00:03:12.542 node0 1048576kB 0 / 0 00:03:12.542 node0 2048kB 2048 / 2048 00:03:12.542 node1 1048576kB 0 / 0 00:03:12.802 node1 2048kB 0 / 0 00:03:12.802 00:03:12.802 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:12.802 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:12.802 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:12.802 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:12.802 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:12.802 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:12.802 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:12.802 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:12.802 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:12.802 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:12.802 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:12.802 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:12.802 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:12.802 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:12.802 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:12.802 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:12.802 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:12.802 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:12.802 13:32:09 -- spdk/autotest.sh@130 -- # uname -s 00:03:12.802 13:32:09 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:12.802 13:32:09 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:12.802 13:32:09 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:14.180 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:14.180 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:14.180 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:14.180 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:14.180 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:14.180 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:14.180 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:14.180 0000:00:04.0 (8086 
0e20): ioatdma -> vfio-pci 00:03:14.180 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:14.180 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:14.180 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:14.180 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:14.180 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:14.180 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:14.180 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:14.180 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:15.118 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:15.118 13:32:12 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:16.495 13:32:13 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:16.495 13:32:13 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:16.495 13:32:13 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:16.495 13:32:13 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:16.495 13:32:13 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:16.495 13:32:13 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:16.495 13:32:13 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:16.495 13:32:13 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:16.495 13:32:13 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:16.495 13:32:13 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:16.495 13:32:13 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:16.495 13:32:13 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.470 Waiting for block devices as requested 00:03:17.470 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:17.728 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:17.728 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:17.728 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:17.987 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:17.987 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:17.987 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:17.987 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:18.246 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:18.246 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:18.246 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:18.504 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:18.504 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:18.504 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:18.504 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:18.762 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:18.762 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:18.762 13:32:15 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:18.762 13:32:15 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:18.762 13:32:15 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:18.762 13:32:15 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:03:18.762 13:32:15 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:18.762 13:32:15 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:18.762 13:32:15 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:18.762 
13:32:15 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:18.762 13:32:15 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:18.762 13:32:15 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:18.762 13:32:15 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:18.762 13:32:15 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:18.762 13:32:15 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:18.762 13:32:15 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:18.762 13:32:15 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:19.019 13:32:15 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:19.019 13:32:15 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:19.019 13:32:15 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:19.019 13:32:15 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:19.019 13:32:15 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:19.019 13:32:15 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:19.019 13:32:15 -- common/autotest_common.sh@1557 -- # continue 00:03:19.019 13:32:15 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:19.019 13:32:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:19.019 13:32:15 -- common/autotest_common.sh@10 -- # set +x 00:03:19.019 13:32:15 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:19.019 13:32:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:19.019 13:32:15 -- common/autotest_common.sh@10 -- # set +x 00:03:19.019 13:32:15 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:20.393 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:20.393 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:20.393 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:20.393 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:20.393 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:20.393 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:20.393 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:20.393 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:20.393 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:20.393 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:20.393 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:20.393 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:20.393 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:20.393 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:20.393 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:20.393 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:21.328 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:21.328 13:32:18 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:21.328 13:32:18 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:21.328 13:32:18 -- common/autotest_common.sh@10 -- # set +x 00:03:21.328 13:32:18 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:21.328 13:32:18 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:21.328 13:32:18 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:21.328 13:32:18 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:21.328 13:32:18 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:21.328 13:32:18 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:21.328 13:32:18 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:21.328 13:32:18 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:21.328 13:32:18 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:21.328 13:32:18 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:21.328 13:32:18 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:21.328 13:32:18 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:21.328 13:32:18 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:21.328 13:32:18 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:21.328 13:32:18 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:21.328 13:32:18 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:21.328 13:32:18 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:21.328 13:32:18 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:21.328 13:32:18 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:03:21.328 13:32:18 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:03:21.328 13:32:18 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=439192 00:03:21.328 13:32:18 -- common/autotest_common.sh@1598 -- # waitforlisten 439192 00:03:21.328 13:32:18 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:21.328 13:32:18 -- common/autotest_common.sh@831 -- # '[' -z 439192 ']' 00:03:21.328 13:32:18 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:21.328 13:32:18 -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:21.328 13:32:18 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:21.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:21.328 13:32:18 -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:21.328 13:32:18 -- common/autotest_common.sh@10 -- # set +x 00:03:21.328 [2024-07-25 13:32:18.356729] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:03:21.328 [2024-07-25 13:32:18.356832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439192 ] 00:03:21.586 EAL: No free 2048 kB hugepages reported on node 1 00:03:21.586 [2024-07-25 13:32:18.413965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:21.586 [2024-07-25 13:32:18.523807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:21.843 13:32:18 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:21.843 13:32:18 -- common/autotest_common.sh@864 -- # return 0 00:03:21.843 13:32:18 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:21.843 13:32:18 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:21.843 13:32:18 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:25.123 nvme0n1 00:03:25.123 13:32:21 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:25.123 [2024-07-25 13:32:22.074638] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:25.123 [2024-07-25 13:32:22.074682] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:25.123 request: 00:03:25.123 { 00:03:25.123 "nvme_ctrlr_name": "nvme0", 00:03:25.123 "password": "test", 00:03:25.123 "method": "bdev_nvme_opal_revert", 00:03:25.123 "req_id": 1 00:03:25.123 } 00:03:25.123 Got JSON-RPC error response 00:03:25.123 response: 00:03:25.123 { 00:03:25.123 "code": -32603, 00:03:25.123 "message": "Internal error" 00:03:25.123 } 00:03:25.123 13:32:22 -- common/autotest_common.sh@1604 -- # true 00:03:25.123 13:32:22 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:03:25.123 13:32:22 -- common/autotest_common.sh@1608 -- # killprocess 439192 00:03:25.123 13:32:22 -- common/autotest_common.sh@950 -- # '[' -z 439192 ']' 00:03:25.123 13:32:22 -- common/autotest_common.sh@954 -- # kill -0 439192 00:03:25.123 13:32:22 -- common/autotest_common.sh@955 -- # uname 00:03:25.123 13:32:22 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:25.123 13:32:22 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 439192 00:03:25.123 13:32:22 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:25.123 13:32:22 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:25.123 13:32:22 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 439192' 00:03:25.123 killing process with pid 439192 00:03:25.123 13:32:22 -- common/autotest_common.sh@969 -- # kill 439192 00:03:25.123 13:32:22 -- common/autotest_common.sh@974 -- # wait 439192 00:03:27.020 13:32:23 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:27.020 13:32:23 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:27.020 13:32:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:27.020 13:32:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:27.020 13:32:23 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:27.020 13:32:23 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:27.020 13:32:23 -- common/autotest_common.sh@10 -- # set +x 00:03:27.020 13:32:23 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:27.020 13:32:23 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:27.020 13:32:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:27.020 13:32:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:27.020 13:32:23 -- common/autotest_common.sh@10 -- # set +x 00:03:27.020 ************************************ 00:03:27.020 START TEST env 00:03:27.020 ************************************ 00:03:27.020 13:32:23 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:27.020 * Looking for test storage... 00:03:27.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:27.020 13:32:23 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:27.020 13:32:23 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:27.020 13:32:23 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:27.020 13:32:23 env -- common/autotest_common.sh@10 -- # set +x 00:03:27.020 ************************************ 00:03:27.020 START TEST env_memory 00:03:27.020 ************************************ 00:03:27.020 13:32:23 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:27.020 00:03:27.020 00:03:27.020 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.020 http://cunit.sourceforge.net/ 00:03:27.020 00:03:27.020 00:03:27.020 Suite: memory 00:03:27.020 Test: alloc and free memory map ...[2024-07-25 13:32:24.021610] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:27.020 passed 00:03:27.020 Test: mem map translation ...[2024-07-25 13:32:24.041454] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:27.020 [2024-07-25 13:32:24.041475] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:27.020 [2024-07-25 13:32:24.041526] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:27.020 [2024-07-25 13:32:24.041538] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:27.278 passed 00:03:27.278 Test: mem map registration ...[2024-07-25 13:32:24.082092] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:27.278 [2024-07-25 13:32:24.082111] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:27.278 passed 00:03:27.278 Test: mem map adjacent registrations ...passed 00:03:27.278 00:03:27.278 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.278 suites 1 1 n/a 0 0 00:03:27.278 tests 4 4 4 0 0 00:03:27.278 asserts 152 152 152 0 n/a 00:03:27.278 00:03:27.278 Elapsed time = 0.140 seconds 00:03:27.278 00:03:27.278 real 0m0.149s 00:03:27.278 user 0m0.142s 00:03:27.278 sys 0m0.007s 00:03:27.278 13:32:24 
env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:27.278 13:32:24 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:27.278 ************************************ 00:03:27.278 END TEST env_memory 00:03:27.278 ************************************ 00:03:27.278 13:32:24 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:27.278 13:32:24 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:27.278 13:32:24 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:27.278 13:32:24 env -- common/autotest_common.sh@10 -- # set +x 00:03:27.278 ************************************ 00:03:27.278 START TEST env_vtophys 00:03:27.278 ************************************ 00:03:27.278 13:32:24 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:27.278 EAL: lib.eal log level changed from notice to debug 00:03:27.278 EAL: Detected lcore 0 as core 0 on socket 0 00:03:27.278 EAL: Detected lcore 1 as core 1 on socket 0 00:03:27.278 EAL: Detected lcore 2 as core 2 on socket 0 00:03:27.278 EAL: Detected lcore 3 as core 3 on socket 0 00:03:27.278 EAL: Detected lcore 4 as core 4 on socket 0 00:03:27.278 EAL: Detected lcore 5 as core 5 on socket 0 00:03:27.278 EAL: Detected lcore 6 as core 8 on socket 0 00:03:27.278 EAL: Detected lcore 7 as core 9 on socket 0 00:03:27.278 EAL: Detected lcore 8 as core 10 on socket 0 00:03:27.278 EAL: Detected lcore 9 as core 11 on socket 0 00:03:27.278 EAL: Detected lcore 10 as core 12 on socket 0 00:03:27.278 EAL: Detected lcore 11 as core 13 on socket 0 00:03:27.278 EAL: Detected lcore 12 as core 0 on socket 1 00:03:27.278 EAL: Detected lcore 13 as core 1 on socket 1 00:03:27.278 EAL: Detected lcore 14 as core 2 on socket 1 00:03:27.278 EAL: Detected lcore 15 as core 3 on socket 1 00:03:27.278 EAL: Detected lcore 16 as core 4 on socket 1 00:03:27.278 EAL: Detected lcore 17 as core 5 on socket 1 00:03:27.278 EAL: Detected lcore 18 as core 8 on socket 1 00:03:27.278 EAL: Detected lcore 19 as core 9 on socket 1 00:03:27.278 EAL: Detected lcore 20 as core 10 on socket 1 00:03:27.278 EAL: Detected lcore 21 as core 11 on socket 1 00:03:27.278 EAL: Detected lcore 22 as core 12 on socket 1 00:03:27.279 EAL: Detected lcore 23 as core 13 on socket 1 00:03:27.279 EAL: Detected lcore 24 as core 0 on socket 0 00:03:27.279 EAL: Detected lcore 25 as core 1 on socket 0 00:03:27.279 EAL: Detected lcore 26 as core 2 on socket 0 00:03:27.279 EAL: Detected lcore 27 as core 3 on socket 0 00:03:27.279 EAL: Detected lcore 28 as core 4 on socket 0 00:03:27.279 EAL: Detected lcore 29 as core 5 on socket 0 00:03:27.279 EAL: Detected lcore 30 as core 8 on socket 0 00:03:27.279 EAL: Detected lcore 31 as core 9 on socket 0 00:03:27.279 EAL: Detected lcore 32 as core 10 on socket 0 00:03:27.279 EAL: Detected lcore 33 as core 11 on socket 0 00:03:27.279 EAL: Detected lcore 34 as core 12 on socket 0 00:03:27.279 EAL: Detected lcore 35 as core 13 on socket 0 00:03:27.279 EAL: Detected lcore 36 as core 0 on socket 1 00:03:27.279 EAL: Detected lcore 37 as core 1 on socket 1 00:03:27.279 EAL: Detected lcore 38 as core 2 on socket 1 00:03:27.279 EAL: Detected lcore 39 as core 3 on socket 1 00:03:27.279 EAL: Detected lcore 40 as core 4 on socket 1 00:03:27.279 EAL: Detected lcore 41 as core 5 on socket 1 00:03:27.279 EAL: Detected lcore 42 as core 8 on socket 1 00:03:27.279 EAL: Detected lcore 43 as core 9 
on socket 1 00:03:27.279 EAL: Detected lcore 44 as core 10 on socket 1 00:03:27.279 EAL: Detected lcore 45 as core 11 on socket 1 00:03:27.279 EAL: Detected lcore 46 as core 12 on socket 1 00:03:27.279 EAL: Detected lcore 47 as core 13 on socket 1 00:03:27.279 EAL: Maximum logical cores by configuration: 128 00:03:27.279 EAL: Detected CPU lcores: 48 00:03:27.279 EAL: Detected NUMA nodes: 2 00:03:27.279 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:27.279 EAL: Detected shared linkage of DPDK 00:03:27.279 EAL: No shared files mode enabled, IPC will be disabled 00:03:27.279 EAL: Bus pci wants IOVA as 'DC' 00:03:27.279 EAL: Buses did not request a specific IOVA mode. 00:03:27.279 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:27.279 EAL: Selected IOVA mode 'VA' 00:03:27.279 EAL: No free 2048 kB hugepages reported on node 1 00:03:27.279 EAL: Probing VFIO support... 00:03:27.279 EAL: IOMMU type 1 (Type 1) is supported 00:03:27.279 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:27.279 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:27.279 EAL: VFIO support initialized 00:03:27.279 EAL: Ask a virtual area of 0x2e000 bytes 00:03:27.279 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:27.279 EAL: Setting up physically contiguous memory... 00:03:27.279 EAL: Setting maximum number of open files to 524288 00:03:27.279 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:27.279 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:27.279 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:27.279 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.279 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:27.279 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:27.279 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.279 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:27.279 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:27.279 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.279 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:27.279 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:27.279 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.279 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:27.279 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:27.279 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.279 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:27.279 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:27.279 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.279 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:27.279 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:27.279 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.279 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:27.279 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:27.279 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.279 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:27.279 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:27.279 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:27.279 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.279 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:27.279 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:27.279 EAL: Ask a virtual 
area of 0x400000000 bytes 00:03:27.279 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:27.279 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:27.279 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.279 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:27.279 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:27.279 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.279 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:27.279 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:27.279 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.279 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:27.279 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:27.279 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.279 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:27.279 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:27.279 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.279 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:27.279 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:27.279 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.279 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:27.279 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:27.279 EAL: Hugepages will be freed exactly as allocated. 00:03:27.279 EAL: No shared files mode enabled, IPC is disabled 00:03:27.279 EAL: No shared files mode enabled, IPC is disabled 00:03:27.279 EAL: TSC frequency is ~2700000 KHz 00:03:27.279 EAL: Main lcore 0 is ready (tid=7f83e5637a00;cpuset=[0]) 00:03:27.279 EAL: Trying to obtain current memory policy. 00:03:27.279 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.279 EAL: Restoring previous memory policy: 0 00:03:27.279 EAL: request: mp_malloc_sync 00:03:27.279 EAL: No shared files mode enabled, IPC is disabled 00:03:27.279 EAL: Heap on socket 0 was expanded by 2MB 00:03:27.279 EAL: No shared files mode enabled, IPC is disabled 00:03:27.279 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:27.279 EAL: Mem event callback 'spdk:(nil)' registered 00:03:27.279 00:03:27.279 00:03:27.279 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.279 http://cunit.sourceforge.net/ 00:03:27.279 00:03:27.279 00:03:27.279 Suite: components_suite 00:03:27.279 Test: vtophys_malloc_test ...passed 00:03:27.279 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:27.279 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.279 EAL: Restoring previous memory policy: 4 00:03:27.279 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.279 EAL: request: mp_malloc_sync 00:03:27.279 EAL: No shared files mode enabled, IPC is disabled 00:03:27.279 EAL: Heap on socket 0 was expanded by 4MB 00:03:27.279 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.279 EAL: request: mp_malloc_sync 00:03:27.279 EAL: No shared files mode enabled, IPC is disabled 00:03:27.279 EAL: Heap on socket 0 was shrunk by 4MB 00:03:27.279 EAL: Trying to obtain current memory policy. 
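The expand/shrink pairs that follow are the vtophys malloc test walking allocation sizes upward; each reported step (4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB) is a power of two plus 2 MB, which suggests every round maps one power-of-two buffer plus a little heap overhead out of 2 MB hugepages. The kernel-side pool the EAL is drawing on can be watched directly through standard sysfs paths (the same 2048 kB pools listed in the Hugepages table earlier):

    watch_hugepages() {
        local n
        for n in 0 1; do   # this rig has two NUMA nodes
            printf 'node%d: %s free / %s total 2048kB pages\n' "$n" \
                "$(cat /sys/devices/system/node/node$n/hugepages/hugepages-2048kB/free_hugepages)" \
                "$(cat /sys/devices/system/node/node$n/hugepages/hugepages-2048kB/nr_hugepages)"
        done
    }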
00:03:27.279 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.279 EAL: Restoring previous memory policy: 4 00:03:27.279 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.279 EAL: request: mp_malloc_sync 00:03:27.279 EAL: No shared files mode enabled, IPC is disabled 00:03:27.279 EAL: Heap on socket 0 was expanded by 6MB 00:03:27.279 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.279 EAL: request: mp_malloc_sync 00:03:27.279 EAL: No shared files mode enabled, IPC is disabled 00:03:27.279 EAL: Heap on socket 0 was shrunk by 6MB 00:03:27.279 EAL: Trying to obtain current memory policy. 00:03:27.279 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.279 EAL: Restoring previous memory policy: 4 00:03:27.279 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.279 EAL: request: mp_malloc_sync 00:03:27.279 EAL: No shared files mode enabled, IPC is disabled 00:03:27.279 EAL: Heap on socket 0 was expanded by 10MB 00:03:27.279 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.279 EAL: request: mp_malloc_sync 00:03:27.279 EAL: No shared files mode enabled, IPC is disabled 00:03:27.279 EAL: Heap on socket 0 was shrunk by 10MB 00:03:27.279 EAL: Trying to obtain current memory policy. 00:03:27.279 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.279 EAL: Restoring previous memory policy: 4 00:03:27.279 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.279 EAL: request: mp_malloc_sync 00:03:27.279 EAL: No shared files mode enabled, IPC is disabled 00:03:27.279 EAL: Heap on socket 0 was expanded by 18MB 00:03:27.279 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.279 EAL: request: mp_malloc_sync 00:03:27.279 EAL: No shared files mode enabled, IPC is disabled 00:03:27.279 EAL: Heap on socket 0 was shrunk by 18MB 00:03:27.279 EAL: Trying to obtain current memory policy. 00:03:27.279 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.279 EAL: Restoring previous memory policy: 4 00:03:27.279 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.279 EAL: request: mp_malloc_sync 00:03:27.279 EAL: No shared files mode enabled, IPC is disabled 00:03:27.279 EAL: Heap on socket 0 was expanded by 34MB 00:03:27.279 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.279 EAL: request: mp_malloc_sync 00:03:27.280 EAL: No shared files mode enabled, IPC is disabled 00:03:27.280 EAL: Heap on socket 0 was shrunk by 34MB 00:03:27.280 EAL: Trying to obtain current memory policy. 00:03:27.280 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.280 EAL: Restoring previous memory policy: 4 00:03:27.280 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.280 EAL: request: mp_malloc_sync 00:03:27.280 EAL: No shared files mode enabled, IPC is disabled 00:03:27.280 EAL: Heap on socket 0 was expanded by 66MB 00:03:27.280 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.537 EAL: request: mp_malloc_sync 00:03:27.537 EAL: No shared files mode enabled, IPC is disabled 00:03:27.537 EAL: Heap on socket 0 was shrunk by 66MB 00:03:27.537 EAL: Trying to obtain current memory policy. 
00:03:27.537 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.537 EAL: Restoring previous memory policy: 4 00:03:27.537 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.537 EAL: request: mp_malloc_sync 00:03:27.537 EAL: No shared files mode enabled, IPC is disabled 00:03:27.537 EAL: Heap on socket 0 was expanded by 130MB 00:03:27.537 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.537 EAL: request: mp_malloc_sync 00:03:27.537 EAL: No shared files mode enabled, IPC is disabled 00:03:27.537 EAL: Heap on socket 0 was shrunk by 130MB 00:03:27.537 EAL: Trying to obtain current memory policy. 00:03:27.537 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.537 EAL: Restoring previous memory policy: 4 00:03:27.537 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.537 EAL: request: mp_malloc_sync 00:03:27.537 EAL: No shared files mode enabled, IPC is disabled 00:03:27.537 EAL: Heap on socket 0 was expanded by 258MB 00:03:27.537 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.795 EAL: request: mp_malloc_sync 00:03:27.795 EAL: No shared files mode enabled, IPC is disabled 00:03:27.795 EAL: Heap on socket 0 was shrunk by 258MB 00:03:27.795 EAL: Trying to obtain current memory policy. 00:03:27.795 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.795 EAL: Restoring previous memory policy: 4 00:03:27.795 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.795 EAL: request: mp_malloc_sync 00:03:27.795 EAL: No shared files mode enabled, IPC is disabled 00:03:27.795 EAL: Heap on socket 0 was expanded by 514MB 00:03:27.795 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.053 EAL: request: mp_malloc_sync 00:03:28.053 EAL: No shared files mode enabled, IPC is disabled 00:03:28.053 EAL: Heap on socket 0 was shrunk by 514MB 00:03:28.053 EAL: Trying to obtain current memory policy. 
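The "Setting policy MPOL_PREFERRED for socket 0" / "Restoring previous memory policy" lines bracketing every round are the EAL steering each allocation to the local NUMA node and then putting the caller's policy back (0 is MPOL_DEFAULT on the first round and 4 MPOL_LOCAL afterwards, going by the kernel's mempolicy numbering). The same preference can be imposed on a whole process from the shell; numactl is an assumption here, not something the harness invokes:

    numactl --preferred=0 "$rootdir/test/env/vtophys/vtophys"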
00:03:28.053 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:28.311 EAL: Restoring previous memory policy: 4 00:03:28.311 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.311 EAL: request: mp_malloc_sync 00:03:28.311 EAL: No shared files mode enabled, IPC is disabled 00:03:28.311 EAL: Heap on socket 0 was expanded by 1026MB 00:03:28.570 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.570 EAL: request: mp_malloc_sync 00:03:28.570 EAL: No shared files mode enabled, IPC is disabled 00:03:28.570 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:28.570 passed 00:03:28.570 00:03:28.570 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.570 suites 1 1 n/a 0 0 00:03:28.570 tests 2 2 2 0 0 00:03:28.570 asserts 497 497 497 0 n/a 00:03:28.570 00:03:28.570 Elapsed time = 1.307 seconds 00:03:28.570 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.570 EAL: request: mp_malloc_sync 00:03:28.570 EAL: No shared files mode enabled, IPC is disabled 00:03:28.570 EAL: Heap on socket 0 was shrunk by 2MB 00:03:28.570 EAL: No shared files mode enabled, IPC is disabled 00:03:28.570 EAL: No shared files mode enabled, IPC is disabled 00:03:28.570 EAL: No shared files mode enabled, IPC is disabled 00:03:28.570 00:03:28.570 real 0m1.419s 00:03:28.570 user 0m0.831s 00:03:28.570 sys 0m0.557s 00:03:28.570 13:32:25 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:28.570 13:32:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:28.570 ************************************ 00:03:28.570 END TEST env_vtophys 00:03:28.570 ************************************ 00:03:28.828 13:32:25 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:28.828 13:32:25 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:28.828 13:32:25 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:28.828 13:32:25 env -- common/autotest_common.sh@10 -- # set +x 00:03:28.828 ************************************ 00:03:28.828 START TEST env_pci 00:03:28.828 ************************************ 00:03:28.828 13:32:25 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:28.828 00:03:28.828 00:03:28.828 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.828 http://cunit.sourceforge.net/ 00:03:28.828 00:03:28.829 00:03:28.829 Suite: pci 00:03:28.829 Test: pci_hook ...[2024-07-25 13:32:25.659493] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 440096 has claimed it 00:03:28.829 EAL: Cannot find device (10000:00:01.0) 00:03:28.829 EAL: Failed to attach device on primary process 00:03:28.829 passed 00:03:28.829 00:03:28.829 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.829 suites 1 1 n/a 0 0 00:03:28.829 tests 1 1 1 0 0 00:03:28.829 asserts 25 25 25 0 n/a 00:03:28.829 00:03:28.829 Elapsed time = 0.021 seconds 00:03:28.829 00:03:28.829 real 0m0.034s 00:03:28.829 user 0m0.012s 00:03:28.829 sys 0m0.021s 00:03:28.829 13:32:25 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:28.829 13:32:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:28.829 ************************************ 00:03:28.829 END TEST env_pci 00:03:28.829 ************************************ 00:03:28.829 13:32:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:28.829 
13:32:25 env -- env/env.sh@15 -- # uname 00:03:28.829 13:32:25 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:28.829 13:32:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:28.829 13:32:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:28.829 13:32:25 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:03:28.829 13:32:25 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:28.829 13:32:25 env -- common/autotest_common.sh@10 -- # set +x 00:03:28.829 ************************************ 00:03:28.829 START TEST env_dpdk_post_init 00:03:28.829 ************************************ 00:03:28.829 13:32:25 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:28.829 EAL: Detected CPU lcores: 48 00:03:28.829 EAL: Detected NUMA nodes: 2 00:03:28.829 EAL: Detected shared linkage of DPDK 00:03:28.829 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:28.829 EAL: Selected IOVA mode 'VA' 00:03:28.829 EAL: No free 2048 kB hugepages reported on node 1 00:03:28.829 EAL: VFIO support initialized 00:03:28.829 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:28.829 EAL: Using IOMMU type 1 (Type 1) 00:03:28.829 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:28.829 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:28.829 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:29.088 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:29.088 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:29.088 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:29.088 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:29.088 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:29.088 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:29.088 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:29.088 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:29.088 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:29.088 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:29.088 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:29.088 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:29.088 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:30.024 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:33.303 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:33.303 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:33.303 Starting DPDK initialization... 00:03:33.303 Starting SPDK post initialization... 00:03:33.303 SPDK NVMe probe 00:03:33.303 Attaching to 0000:88:00.0 00:03:33.303 Attached to 0000:88:00.0 00:03:33.303 Cleaning up... 
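env_dpdk_post_init only finds the ioat channels and the NVMe controller because setup.sh moved them onto vfio-pci beforehand (the "ioatdma -> vfio-pci" and "nvme -> vfio-pci" blocks earlier). Which driver owns a function at any moment is a plain sysfs symlink, a quick way to confirm the state those EAL probe lines reflect:

    driver_of() { basename "$(readlink -f "/sys/bus/pci/devices/$1/driver")"; }
    driver_of 0000:88:00.0   # vfio-pci while a test holds it, nvme after setup.sh reset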
00:03:33.303 00:03:33.303 real 0m4.430s 00:03:33.303 user 0m3.300s 00:03:33.303 sys 0m0.190s 00:03:33.303 13:32:30 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:33.303 13:32:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:33.303 ************************************ 00:03:33.303 END TEST env_dpdk_post_init 00:03:33.303 ************************************ 00:03:33.303 13:32:30 env -- env/env.sh@26 -- # uname 00:03:33.303 13:32:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:33.303 13:32:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:33.303 13:32:30 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:33.303 13:32:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:33.303 13:32:30 env -- common/autotest_common.sh@10 -- # set +x 00:03:33.303 ************************************ 00:03:33.303 START TEST env_mem_callbacks 00:03:33.303 ************************************ 00:03:33.303 13:32:30 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:33.303 EAL: Detected CPU lcores: 48 00:03:33.303 EAL: Detected NUMA nodes: 2 00:03:33.303 EAL: Detected shared linkage of DPDK 00:03:33.303 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:33.303 EAL: Selected IOVA mode 'VA' 00:03:33.303 EAL: No free 2048 kB hugepages reported on node 1 00:03:33.303 EAL: VFIO support initialized 00:03:33.303 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:33.303 00:03:33.303 00:03:33.303 CUnit - A unit testing framework for C - Version 2.1-3 00:03:33.303 http://cunit.sourceforge.net/ 00:03:33.303 00:03:33.303 00:03:33.303 Suite: memory 00:03:33.303 Test: test ... 
00:03:33.303 register 0x200000200000 2097152 00:03:33.303 malloc 3145728 00:03:33.303 register 0x200000400000 4194304 00:03:33.303 buf 0x200000500000 len 3145728 PASSED 00:03:33.303 malloc 64 00:03:33.303 buf 0x2000004fff40 len 64 PASSED 00:03:33.303 malloc 4194304 00:03:33.303 register 0x200000800000 6291456 00:03:33.303 buf 0x200000a00000 len 4194304 PASSED 00:03:33.303 free 0x200000500000 3145728 00:03:33.303 free 0x2000004fff40 64 00:03:33.303 unregister 0x200000400000 4194304 PASSED 00:03:33.303 free 0x200000a00000 4194304 00:03:33.303 unregister 0x200000800000 6291456 PASSED 00:03:33.303 malloc 8388608 00:03:33.303 register 0x200000400000 10485760 00:03:33.303 buf 0x200000600000 len 8388608 PASSED 00:03:33.303 free 0x200000600000 8388608 00:03:33.303 unregister 0x200000400000 10485760 PASSED 00:03:33.303 passed 00:03:33.303 00:03:33.303 Run Summary: Type Total Ran Passed Failed Inactive 00:03:33.303 suites 1 1 n/a 0 0 00:03:33.303 tests 1 1 1 0 0 00:03:33.303 asserts 15 15 15 0 n/a 00:03:33.303 00:03:33.303 Elapsed time = 0.005 seconds 00:03:33.303 00:03:33.303 real 0m0.048s 00:03:33.303 user 0m0.016s 00:03:33.303 sys 0m0.032s 00:03:33.303 13:32:30 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:33.303 13:32:30 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:33.303 ************************************ 00:03:33.303 END TEST env_mem_callbacks 00:03:33.303 ************************************ 00:03:33.303 00:03:33.303 real 0m6.374s 00:03:33.303 user 0m4.409s 00:03:33.303 sys 0m1.014s 00:03:33.303 13:32:30 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:33.303 13:32:30 env -- common/autotest_common.sh@10 -- # set +x 00:03:33.303 ************************************ 00:03:33.303 END TEST env 00:03:33.303 ************************************ 00:03:33.303 13:32:30 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:33.303 13:32:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:33.303 13:32:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:33.303 13:32:30 -- common/autotest_common.sh@10 -- # set +x 00:03:33.303 ************************************ 00:03:33.303 START TEST rpc 00:03:33.303 ************************************ 00:03:33.303 13:32:30 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:33.561 * Looking for test storage... 00:03:33.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:33.561 13:32:30 rpc -- rpc/rpc.sh@65 -- # spdk_pid=440754 00:03:33.561 13:32:30 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:33.561 13:32:30 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:33.561 13:32:30 rpc -- rpc/rpc.sh@67 -- # waitforlisten 440754 00:03:33.561 13:32:30 rpc -- common/autotest_common.sh@831 -- # '[' -z 440754 ']' 00:03:33.561 13:32:30 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:33.561 13:32:30 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:33.561 13:32:30 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:33.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
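In the callback trace above, the 64-byte malloc fires no register line because it is carved from an already-registered region, while the multi-megabyte allocations grow the heap and hand the callback whole 2 MB-aligned spans. The rpc suite that starts next launches its own spdk_tgt and blocks in waitforlisten until the target answers on /var/tmp/spdk.sock; a minimal stand-in for that helper (the real one in autotest_common.sh also honors the max_retries=100 seen in the trace):

    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            "$rootdir/scripts/rpc.py" -s "$sock" -t 1 rpc_get_methods \
                >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1
    }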
00:03:33.561 13:32:30 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:33.561 13:32:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:33.561 [2024-07-25 13:32:30.431152] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:03:33.561 [2024-07-25 13:32:30.431234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440754 ] 00:03:33.561 EAL: No free 2048 kB hugepages reported on node 1 00:03:33.561 [2024-07-25 13:32:30.487926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:33.561 [2024-07-25 13:32:30.595994] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:33.561 [2024-07-25 13:32:30.596051] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 440754' to capture a snapshot of events at runtime. 00:03:33.561 [2024-07-25 13:32:30.596073] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:33.561 [2024-07-25 13:32:30.596086] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:33.561 [2024-07-25 13:32:30.596112] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid440754 for offline analysis/debug. 00:03:33.561 [2024-07-25 13:32:30.596149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:33.819 13:32:30 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:33.819 13:32:30 rpc -- common/autotest_common.sh@864 -- # return 0 00:03:33.819 13:32:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:33.819 13:32:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:33.819 13:32:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:33.819 13:32:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:33.819 13:32:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:33.819 13:32:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:33.819 13:32:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.078 ************************************ 00:03:34.078 START TEST rpc_integrity 00:03:34.078 ************************************ 00:03:34.078 13:32:30 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:34.078 13:32:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:34.078 13:32:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.078 13:32:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.078 13:32:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.078 13:32:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:34.078 13:32:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:34.078 13:32:30 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:34.078 13:32:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:34.078 13:32:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.078 13:32:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.078 13:32:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.078 13:32:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:34.078 13:32:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:34.078 13:32:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.078 13:32:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.078 13:32:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.078 13:32:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:34.078 { 00:03:34.078 "name": "Malloc0", 00:03:34.078 "aliases": [ 00:03:34.078 "fc37d101-ecc1-4c8d-bb3f-2d417f0d9663" 00:03:34.078 ], 00:03:34.078 "product_name": "Malloc disk", 00:03:34.078 "block_size": 512, 00:03:34.078 "num_blocks": 16384, 00:03:34.078 "uuid": "fc37d101-ecc1-4c8d-bb3f-2d417f0d9663", 00:03:34.078 "assigned_rate_limits": { 00:03:34.078 "rw_ios_per_sec": 0, 00:03:34.078 "rw_mbytes_per_sec": 0, 00:03:34.078 "r_mbytes_per_sec": 0, 00:03:34.078 "w_mbytes_per_sec": 0 00:03:34.078 }, 00:03:34.078 "claimed": false, 00:03:34.078 "zoned": false, 00:03:34.078 "supported_io_types": { 00:03:34.078 "read": true, 00:03:34.078 "write": true, 00:03:34.078 "unmap": true, 00:03:34.078 "flush": true, 00:03:34.078 "reset": true, 00:03:34.078 "nvme_admin": false, 00:03:34.078 "nvme_io": false, 00:03:34.078 "nvme_io_md": false, 00:03:34.078 "write_zeroes": true, 00:03:34.078 "zcopy": true, 00:03:34.078 "get_zone_info": false, 00:03:34.078 "zone_management": false, 00:03:34.078 "zone_append": false, 00:03:34.078 "compare": false, 00:03:34.078 "compare_and_write": false, 00:03:34.078 "abort": true, 00:03:34.078 "seek_hole": false, 00:03:34.078 "seek_data": false, 00:03:34.078 "copy": true, 00:03:34.078 "nvme_iov_md": false 00:03:34.078 }, 00:03:34.078 "memory_domains": [ 00:03:34.078 { 00:03:34.078 "dma_device_id": "system", 00:03:34.078 "dma_device_type": 1 00:03:34.078 }, 00:03:34.078 { 00:03:34.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.078 "dma_device_type": 2 00:03:34.078 } 00:03:34.078 ], 00:03:34.078 "driver_specific": {} 00:03:34.078 } 00:03:34.078 ]' 00:03:34.078 13:32:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:34.078 13:32:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:34.078 13:32:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:34.078 13:32:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.078 13:32:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.078 [2024-07-25 13:32:30.967310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:34.078 [2024-07-25 13:32:30.967370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:34.078 [2024-07-25 13:32:30.967392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15e7d50 00:03:34.078 [2024-07-25 13:32:30.967420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:34.078 [2024-07-25 13:32:30.968706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
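The integrity check that produced the JSON dump above drives a handful of bdev RPCs. A sketch of the same sequence issued with scripts/rpc.py directly, jq doing the length assertions as in rpc.sh:

  malloc=$(./scripts/rpc.py bdev_malloc_create 8 512)   # 8 MiB / 512 B blocks => 16384 blocks
  ./scripts/rpc.py bdev_passthru_create -b "$malloc" -p Passthru0
  # the base bdev is now claimed (claim_type exclusive_write in the dump above)
  ./scripts/rpc.py bdev_get_bdevs | jq length           # 2: the malloc plus Passthru0
  ./scripts/rpc.py bdev_passthru_delete Passthru0       # tear down in reverse order
  ./scripts/rpc.py bdev_malloc_delete "$malloc"
  ./scripts/rpc.py bdev_get_bdevs | jq length           # back to 0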
00:03:34.078 [2024-07-25 13:32:30.968729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:34.078 Passthru0 00:03:34.078 13:32:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.078 13:32:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:34.078 13:32:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.078 13:32:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.078 13:32:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.078 13:32:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:34.078 { 00:03:34.078 "name": "Malloc0", 00:03:34.078 "aliases": [ 00:03:34.078 "fc37d101-ecc1-4c8d-bb3f-2d417f0d9663" 00:03:34.078 ], 00:03:34.078 "product_name": "Malloc disk", 00:03:34.078 "block_size": 512, 00:03:34.078 "num_blocks": 16384, 00:03:34.078 "uuid": "fc37d101-ecc1-4c8d-bb3f-2d417f0d9663", 00:03:34.078 "assigned_rate_limits": { 00:03:34.078 "rw_ios_per_sec": 0, 00:03:34.078 "rw_mbytes_per_sec": 0, 00:03:34.078 "r_mbytes_per_sec": 0, 00:03:34.078 "w_mbytes_per_sec": 0 00:03:34.078 }, 00:03:34.078 "claimed": true, 00:03:34.078 "claim_type": "exclusive_write", 00:03:34.078 "zoned": false, 00:03:34.078 "supported_io_types": { 00:03:34.078 "read": true, 00:03:34.078 "write": true, 00:03:34.078 "unmap": true, 00:03:34.078 "flush": true, 00:03:34.078 "reset": true, 00:03:34.078 "nvme_admin": false, 00:03:34.078 "nvme_io": false, 00:03:34.078 "nvme_io_md": false, 00:03:34.078 "write_zeroes": true, 00:03:34.078 "zcopy": true, 00:03:34.078 "get_zone_info": false, 00:03:34.078 "zone_management": false, 00:03:34.078 "zone_append": false, 00:03:34.078 "compare": false, 00:03:34.078 "compare_and_write": false, 00:03:34.078 "abort": true, 00:03:34.078 "seek_hole": false, 00:03:34.078 "seek_data": false, 00:03:34.078 "copy": true, 00:03:34.078 "nvme_iov_md": false 00:03:34.078 }, 00:03:34.078 "memory_domains": [ 00:03:34.078 { 00:03:34.078 "dma_device_id": "system", 00:03:34.078 "dma_device_type": 1 00:03:34.078 }, 00:03:34.078 { 00:03:34.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.078 "dma_device_type": 2 00:03:34.078 } 00:03:34.078 ], 00:03:34.078 "driver_specific": {} 00:03:34.078 }, 00:03:34.078 { 00:03:34.078 "name": "Passthru0", 00:03:34.078 "aliases": [ 00:03:34.078 "b8545e7c-95f0-5362-8146-7b4dbc096e8c" 00:03:34.078 ], 00:03:34.078 "product_name": "passthru", 00:03:34.078 "block_size": 512, 00:03:34.078 "num_blocks": 16384, 00:03:34.078 "uuid": "b8545e7c-95f0-5362-8146-7b4dbc096e8c", 00:03:34.078 "assigned_rate_limits": { 00:03:34.078 "rw_ios_per_sec": 0, 00:03:34.079 "rw_mbytes_per_sec": 0, 00:03:34.079 "r_mbytes_per_sec": 0, 00:03:34.079 "w_mbytes_per_sec": 0 00:03:34.079 }, 00:03:34.079 "claimed": false, 00:03:34.079 "zoned": false, 00:03:34.079 "supported_io_types": { 00:03:34.079 "read": true, 00:03:34.079 "write": true, 00:03:34.079 "unmap": true, 00:03:34.079 "flush": true, 00:03:34.079 "reset": true, 00:03:34.079 "nvme_admin": false, 00:03:34.079 "nvme_io": false, 00:03:34.079 "nvme_io_md": false, 00:03:34.079 "write_zeroes": true, 00:03:34.079 "zcopy": true, 00:03:34.079 "get_zone_info": false, 00:03:34.079 "zone_management": false, 00:03:34.079 "zone_append": false, 00:03:34.079 "compare": false, 00:03:34.079 "compare_and_write": false, 00:03:34.079 "abort": true, 00:03:34.079 "seek_hole": false, 00:03:34.079 "seek_data": false, 00:03:34.079 "copy": true, 00:03:34.079 "nvme_iov_md": false 00:03:34.079 
}, 00:03:34.079 "memory_domains": [ 00:03:34.079 { 00:03:34.079 "dma_device_id": "system", 00:03:34.079 "dma_device_type": 1 00:03:34.079 }, 00:03:34.079 { 00:03:34.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.079 "dma_device_type": 2 00:03:34.079 } 00:03:34.079 ], 00:03:34.079 "driver_specific": { 00:03:34.079 "passthru": { 00:03:34.079 "name": "Passthru0", 00:03:34.079 "base_bdev_name": "Malloc0" 00:03:34.079 } 00:03:34.079 } 00:03:34.079 } 00:03:34.079 ]' 00:03:34.079 13:32:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:34.079 13:32:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:34.079 13:32:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:34.079 13:32:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.079 13:32:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.079 13:32:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.079 13:32:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:34.079 13:32:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.079 13:32:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.079 13:32:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.079 13:32:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:34.079 13:32:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.079 13:32:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.079 13:32:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.079 13:32:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:34.079 13:32:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:34.079 13:32:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:34.079 00:03:34.079 real 0m0.212s 00:03:34.079 user 0m0.133s 00:03:34.079 sys 0m0.019s 00:03:34.079 13:32:31 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:34.079 13:32:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.079 ************************************ 00:03:34.079 END TEST rpc_integrity 00:03:34.079 ************************************ 00:03:34.079 13:32:31 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:34.079 13:32:31 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:34.079 13:32:31 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:34.079 13:32:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.336 ************************************ 00:03:34.336 START TEST rpc_plugins 00:03:34.336 ************************************ 00:03:34.336 13:32:31 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:03:34.336 13:32:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:34.336 13:32:31 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.336 13:32:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.336 13:32:31 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.336 13:32:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:34.336 13:32:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:34.336 13:32:31 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.336 13:32:31 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:03:34.336 13:32:31 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.336 13:32:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:34.336 { 00:03:34.336 "name": "Malloc1", 00:03:34.336 "aliases": [ 00:03:34.337 "13de9aea-2302-475e-af9c-268b16ba58f0" 00:03:34.337 ], 00:03:34.337 "product_name": "Malloc disk", 00:03:34.337 "block_size": 4096, 00:03:34.337 "num_blocks": 256, 00:03:34.337 "uuid": "13de9aea-2302-475e-af9c-268b16ba58f0", 00:03:34.337 "assigned_rate_limits": { 00:03:34.337 "rw_ios_per_sec": 0, 00:03:34.337 "rw_mbytes_per_sec": 0, 00:03:34.337 "r_mbytes_per_sec": 0, 00:03:34.337 "w_mbytes_per_sec": 0 00:03:34.337 }, 00:03:34.337 "claimed": false, 00:03:34.337 "zoned": false, 00:03:34.337 "supported_io_types": { 00:03:34.337 "read": true, 00:03:34.337 "write": true, 00:03:34.337 "unmap": true, 00:03:34.337 "flush": true, 00:03:34.337 "reset": true, 00:03:34.337 "nvme_admin": false, 00:03:34.337 "nvme_io": false, 00:03:34.337 "nvme_io_md": false, 00:03:34.337 "write_zeroes": true, 00:03:34.337 "zcopy": true, 00:03:34.337 "get_zone_info": false, 00:03:34.337 "zone_management": false, 00:03:34.337 "zone_append": false, 00:03:34.337 "compare": false, 00:03:34.337 "compare_and_write": false, 00:03:34.337 "abort": true, 00:03:34.337 "seek_hole": false, 00:03:34.337 "seek_data": false, 00:03:34.337 "copy": true, 00:03:34.337 "nvme_iov_md": false 00:03:34.337 }, 00:03:34.337 "memory_domains": [ 00:03:34.337 { 00:03:34.337 "dma_device_id": "system", 00:03:34.337 "dma_device_type": 1 00:03:34.337 }, 00:03:34.337 { 00:03:34.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.337 "dma_device_type": 2 00:03:34.337 } 00:03:34.337 ], 00:03:34.337 "driver_specific": {} 00:03:34.337 } 00:03:34.337 ]' 00:03:34.337 13:32:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:34.337 13:32:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:34.337 13:32:31 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:34.337 13:32:31 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.337 13:32:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.337 13:32:31 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.337 13:32:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:34.337 13:32:31 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.337 13:32:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.337 13:32:31 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.337 13:32:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:34.337 13:32:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:34.337 13:32:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:34.337 00:03:34.337 real 0m0.113s 00:03:34.337 user 0m0.072s 00:03:34.337 sys 0m0.011s 00:03:34.337 13:32:31 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:34.337 13:32:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.337 ************************************ 00:03:34.337 END TEST rpc_plugins 00:03:34.337 ************************************ 00:03:34.337 13:32:31 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:34.337 13:32:31 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:34.337 13:32:31 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:34.337 13:32:31 
rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.337 ************************************ 00:03:34.337 START TEST rpc_trace_cmd_test 00:03:34.337 ************************************ 00:03:34.337 13:32:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:03:34.337 13:32:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:34.337 13:32:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:34.337 13:32:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.337 13:32:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:34.337 13:32:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.337 13:32:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:34.337 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid440754", 00:03:34.337 "tpoint_group_mask": "0x8", 00:03:34.337 "iscsi_conn": { 00:03:34.337 "mask": "0x2", 00:03:34.337 "tpoint_mask": "0x0" 00:03:34.337 }, 00:03:34.337 "scsi": { 00:03:34.337 "mask": "0x4", 00:03:34.337 "tpoint_mask": "0x0" 00:03:34.337 }, 00:03:34.337 "bdev": { 00:03:34.337 "mask": "0x8", 00:03:34.337 "tpoint_mask": "0xffffffffffffffff" 00:03:34.337 }, 00:03:34.337 "nvmf_rdma": { 00:03:34.337 "mask": "0x10", 00:03:34.337 "tpoint_mask": "0x0" 00:03:34.337 }, 00:03:34.337 "nvmf_tcp": { 00:03:34.337 "mask": "0x20", 00:03:34.337 "tpoint_mask": "0x0" 00:03:34.337 }, 00:03:34.337 "ftl": { 00:03:34.337 "mask": "0x40", 00:03:34.337 "tpoint_mask": "0x0" 00:03:34.337 }, 00:03:34.337 "blobfs": { 00:03:34.337 "mask": "0x80", 00:03:34.337 "tpoint_mask": "0x0" 00:03:34.337 }, 00:03:34.337 "dsa": { 00:03:34.337 "mask": "0x200", 00:03:34.337 "tpoint_mask": "0x0" 00:03:34.337 }, 00:03:34.337 "thread": { 00:03:34.337 "mask": "0x400", 00:03:34.337 "tpoint_mask": "0x0" 00:03:34.337 }, 00:03:34.337 "nvme_pcie": { 00:03:34.337 "mask": "0x800", 00:03:34.337 "tpoint_mask": "0x0" 00:03:34.337 }, 00:03:34.337 "iaa": { 00:03:34.337 "mask": "0x1000", 00:03:34.337 "tpoint_mask": "0x0" 00:03:34.337 }, 00:03:34.337 "nvme_tcp": { 00:03:34.337 "mask": "0x2000", 00:03:34.337 "tpoint_mask": "0x0" 00:03:34.337 }, 00:03:34.337 "bdev_nvme": { 00:03:34.337 "mask": "0x4000", 00:03:34.337 "tpoint_mask": "0x0" 00:03:34.337 }, 00:03:34.337 "sock": { 00:03:34.337 "mask": "0x8000", 00:03:34.337 "tpoint_mask": "0x0" 00:03:34.337 } 00:03:34.337 }' 00:03:34.337 13:32:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:34.337 13:32:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:34.337 13:32:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:34.337 13:32:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:34.337 13:32:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:34.600 13:32:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:34.600 13:32:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:34.600 13:32:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:34.600 13:32:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:34.600 13:32:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:34.600 00:03:34.600 real 0m0.184s 00:03:34.600 user 0m0.164s 00:03:34.600 sys 0m0.012s 00:03:34.600 13:32:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:34.600 13:32:31 rpc.rpc_trace_cmd_test 
-- common/autotest_common.sh@10 -- # set +x 00:03:34.600 ************************************ 00:03:34.600 END TEST rpc_trace_cmd_test 00:03:34.600 ************************************ 00:03:34.600 13:32:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:34.600 13:32:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:34.600 13:32:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:34.600 13:32:31 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:34.600 13:32:31 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:34.600 13:32:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.600 ************************************ 00:03:34.600 START TEST rpc_daemon_integrity 00:03:34.600 ************************************ 00:03:34.600 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:34.600 13:32:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:34.600 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.600 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.600 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.600 13:32:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:34.600 13:32:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:34.600 13:32:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:34.600 13:32:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:34.600 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.600 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.600 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.600 13:32:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:34.600 13:32:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:34.600 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.600 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.600 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.600 13:32:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:34.600 { 00:03:34.600 "name": "Malloc2", 00:03:34.601 "aliases": [ 00:03:34.601 "e82b668b-12be-4cd6-bd59-b80a1fe9f280" 00:03:34.601 ], 00:03:34.601 "product_name": "Malloc disk", 00:03:34.601 "block_size": 512, 00:03:34.601 "num_blocks": 16384, 00:03:34.601 "uuid": "e82b668b-12be-4cd6-bd59-b80a1fe9f280", 00:03:34.601 "assigned_rate_limits": { 00:03:34.601 "rw_ios_per_sec": 0, 00:03:34.601 "rw_mbytes_per_sec": 0, 00:03:34.601 "r_mbytes_per_sec": 0, 00:03:34.601 "w_mbytes_per_sec": 0 00:03:34.601 }, 00:03:34.601 "claimed": false, 00:03:34.601 "zoned": false, 00:03:34.601 "supported_io_types": { 00:03:34.601 "read": true, 00:03:34.601 "write": true, 00:03:34.601 "unmap": true, 00:03:34.601 "flush": true, 00:03:34.601 "reset": true, 00:03:34.601 "nvme_admin": false, 00:03:34.601 "nvme_io": false, 00:03:34.601 "nvme_io_md": false, 00:03:34.601 "write_zeroes": true, 00:03:34.601 "zcopy": true, 00:03:34.601 "get_zone_info": false, 00:03:34.601 "zone_management": false, 00:03:34.601 "zone_append": false, 00:03:34.601 "compare": false, 00:03:34.601 "compare_and_write": false, 00:03:34.601 "abort": true, 
00:03:34.601 "seek_hole": false, 00:03:34.601 "seek_data": false, 00:03:34.601 "copy": true, 00:03:34.601 "nvme_iov_md": false 00:03:34.601 }, 00:03:34.601 "memory_domains": [ 00:03:34.601 { 00:03:34.601 "dma_device_id": "system", 00:03:34.601 "dma_device_type": 1 00:03:34.601 }, 00:03:34.601 { 00:03:34.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.601 "dma_device_type": 2 00:03:34.601 } 00:03:34.601 ], 00:03:34.601 "driver_specific": {} 00:03:34.601 } 00:03:34.601 ]' 00:03:34.601 13:32:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:34.601 13:32:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:34.601 13:32:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:34.601 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.601 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.601 [2024-07-25 13:32:31.609152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:34.601 [2024-07-25 13:32:31.609194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:34.601 [2024-07-25 13:32:31.609216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15e8c00 00:03:34.601 [2024-07-25 13:32:31.609236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:34.601 [2024-07-25 13:32:31.610445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:34.601 [2024-07-25 13:32:31.610467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:34.601 Passthru0 00:03:34.601 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.601 13:32:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:34.601 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.601 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.601 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.601 13:32:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:34.601 { 00:03:34.601 "name": "Malloc2", 00:03:34.601 "aliases": [ 00:03:34.601 "e82b668b-12be-4cd6-bd59-b80a1fe9f280" 00:03:34.601 ], 00:03:34.601 "product_name": "Malloc disk", 00:03:34.601 "block_size": 512, 00:03:34.601 "num_blocks": 16384, 00:03:34.601 "uuid": "e82b668b-12be-4cd6-bd59-b80a1fe9f280", 00:03:34.601 "assigned_rate_limits": { 00:03:34.601 "rw_ios_per_sec": 0, 00:03:34.601 "rw_mbytes_per_sec": 0, 00:03:34.601 "r_mbytes_per_sec": 0, 00:03:34.601 "w_mbytes_per_sec": 0 00:03:34.601 }, 00:03:34.601 "claimed": true, 00:03:34.601 "claim_type": "exclusive_write", 00:03:34.601 "zoned": false, 00:03:34.601 "supported_io_types": { 00:03:34.601 "read": true, 00:03:34.601 "write": true, 00:03:34.601 "unmap": true, 00:03:34.601 "flush": true, 00:03:34.601 "reset": true, 00:03:34.601 "nvme_admin": false, 00:03:34.601 "nvme_io": false, 00:03:34.601 "nvme_io_md": false, 00:03:34.601 "write_zeroes": true, 00:03:34.601 "zcopy": true, 00:03:34.601 "get_zone_info": false, 00:03:34.601 "zone_management": false, 00:03:34.601 "zone_append": false, 00:03:34.601 "compare": false, 00:03:34.601 "compare_and_write": false, 00:03:34.601 "abort": true, 00:03:34.601 "seek_hole": false, 00:03:34.601 "seek_data": false, 00:03:34.601 "copy": true, 00:03:34.601 "nvme_iov_md": false 
00:03:34.601 }, 00:03:34.601 "memory_domains": [ 00:03:34.601 { 00:03:34.601 "dma_device_id": "system", 00:03:34.601 "dma_device_type": 1 00:03:34.601 }, 00:03:34.601 { 00:03:34.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.601 "dma_device_type": 2 00:03:34.601 } 00:03:34.601 ], 00:03:34.601 "driver_specific": {} 00:03:34.601 }, 00:03:34.601 { 00:03:34.601 "name": "Passthru0", 00:03:34.601 "aliases": [ 00:03:34.601 "8b7bccee-8636-5227-b612-e8dff841036d" 00:03:34.601 ], 00:03:34.601 "product_name": "passthru", 00:03:34.601 "block_size": 512, 00:03:34.601 "num_blocks": 16384, 00:03:34.601 "uuid": "8b7bccee-8636-5227-b612-e8dff841036d", 00:03:34.601 "assigned_rate_limits": { 00:03:34.601 "rw_ios_per_sec": 0, 00:03:34.601 "rw_mbytes_per_sec": 0, 00:03:34.601 "r_mbytes_per_sec": 0, 00:03:34.601 "w_mbytes_per_sec": 0 00:03:34.601 }, 00:03:34.602 "claimed": false, 00:03:34.602 "zoned": false, 00:03:34.602 "supported_io_types": { 00:03:34.602 "read": true, 00:03:34.602 "write": true, 00:03:34.602 "unmap": true, 00:03:34.602 "flush": true, 00:03:34.602 "reset": true, 00:03:34.602 "nvme_admin": false, 00:03:34.602 "nvme_io": false, 00:03:34.602 "nvme_io_md": false, 00:03:34.602 "write_zeroes": true, 00:03:34.602 "zcopy": true, 00:03:34.602 "get_zone_info": false, 00:03:34.602 "zone_management": false, 00:03:34.602 "zone_append": false, 00:03:34.602 "compare": false, 00:03:34.602 "compare_and_write": false, 00:03:34.602 "abort": true, 00:03:34.602 "seek_hole": false, 00:03:34.602 "seek_data": false, 00:03:34.602 "copy": true, 00:03:34.602 "nvme_iov_md": false 00:03:34.602 }, 00:03:34.602 "memory_domains": [ 00:03:34.602 { 00:03:34.602 "dma_device_id": "system", 00:03:34.602 "dma_device_type": 1 00:03:34.602 }, 00:03:34.602 { 00:03:34.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.602 "dma_device_type": 2 00:03:34.602 } 00:03:34.602 ], 00:03:34.602 "driver_specific": { 00:03:34.602 "passthru": { 00:03:34.602 "name": "Passthru0", 00:03:34.602 "base_bdev_name": "Malloc2" 00:03:34.602 } 00:03:34.602 } 00:03:34.602 } 00:03:34.602 ]' 00:03:34.602 13:32:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:34.859 13:32:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:34.859 13:32:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:34.859 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.859 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.859 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.859 13:32:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:34.859 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.859 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.859 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.859 13:32:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:34.859 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:34.859 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.859 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:34.859 13:32:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:34.859 13:32:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # 
jq length 00:03:34.859 13:32:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:34.859 00:03:34.859 real 0m0.213s 00:03:34.859 user 0m0.139s 00:03:34.859 sys 0m0.018s 00:03:34.859 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:34.859 13:32:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.859 ************************************ 00:03:34.859 END TEST rpc_daemon_integrity 00:03:34.859 ************************************ 00:03:34.859 13:32:31 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:34.859 13:32:31 rpc -- rpc/rpc.sh@84 -- # killprocess 440754 00:03:34.859 13:32:31 rpc -- common/autotest_common.sh@950 -- # '[' -z 440754 ']' 00:03:34.859 13:32:31 rpc -- common/autotest_common.sh@954 -- # kill -0 440754 00:03:34.859 13:32:31 rpc -- common/autotest_common.sh@955 -- # uname 00:03:34.859 13:32:31 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:34.859 13:32:31 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 440754 00:03:34.859 13:32:31 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:34.860 13:32:31 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:34.860 13:32:31 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 440754' 00:03:34.860 killing process with pid 440754 00:03:34.860 13:32:31 rpc -- common/autotest_common.sh@969 -- # kill 440754 00:03:34.860 13:32:31 rpc -- common/autotest_common.sh@974 -- # wait 440754 00:03:35.492 00:03:35.492 real 0m1.853s 00:03:35.492 user 0m2.306s 00:03:35.492 sys 0m0.547s 00:03:35.492 13:32:32 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:35.492 13:32:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.492 ************************************ 00:03:35.492 END TEST rpc 00:03:35.492 ************************************ 00:03:35.493 13:32:32 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:35.493 13:32:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:35.493 13:32:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:35.493 13:32:32 -- common/autotest_common.sh@10 -- # set +x 00:03:35.493 ************************************ 00:03:35.493 START TEST skip_rpc 00:03:35.493 ************************************ 00:03:35.493 13:32:32 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:35.493 * Looking for test storage... 
00:03:35.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:35.493 13:32:32 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:35.493 13:32:32 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:35.493 13:32:32 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:35.493 13:32:32 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:35.493 13:32:32 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:35.493 13:32:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.493 ************************************ 00:03:35.493 START TEST skip_rpc 00:03:35.493 ************************************ 00:03:35.493 13:32:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:03:35.493 13:32:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=441185 00:03:35.493 13:32:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:35.493 13:32:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:35.493 13:32:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:35.493 [2024-07-25 13:32:32.364949] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:03:35.493 [2024-07-25 13:32:32.365012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441185 ] 00:03:35.493 EAL: No free 2048 kB hugepages reported on node 1 00:03:35.493 [2024-07-25 13:32:32.420609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:35.775 [2024-07-25 13:32:32.533302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 441185 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 441185 ']' 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 441185 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 441185 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:41.047 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:41.048 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 441185' 00:03:41.048 killing process with pid 441185 00:03:41.048 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 441185 00:03:41.048 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 441185 00:03:41.048 00:03:41.048 real 0m5.458s 00:03:41.048 user 0m5.158s 00:03:41.048 sys 0m0.307s 00:03:41.048 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:41.048 13:32:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.048 ************************************ 00:03:41.048 END TEST skip_rpc 00:03:41.048 ************************************ 00:03:41.048 13:32:37 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:41.048 13:32:37 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:41.048 13:32:37 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:41.048 13:32:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.048 ************************************ 00:03:41.048 START TEST skip_rpc_with_json 00:03:41.048 ************************************ 00:03:41.048 13:32:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:03:41.048 13:32:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:41.048 13:32:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=441883 00:03:41.048 13:32:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:41.048 13:32:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:41.048 13:32:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 441883 00:03:41.048 13:32:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 441883 ']' 00:03:41.048 13:32:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:41.048 13:32:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:41.048 13:32:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:41.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
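The skip_rpc case that just finished passes precisely because nothing is listening: with --no-rpc-server the target never opens /var/tmp/spdk.sock, the client call errors out, and the NOT wrapper turns that failure (the es=1 above) into success. A stand-alone sketch of the same check, under the same build-tree assumption as before:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  pid=$!; sleep 5                       # the test's fixed settle time (skip_rpc.sh@19)
  if ./scripts/rpc.py spdk_get_version; then
      echo "FAIL: RPC server unexpectedly up"; exit 1
  fi                                    # a non-zero status here is the expected outcome
  kill "$pid"; wait "$pid"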
00:03:41.048 13:32:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:41.048 13:32:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.048 [2024-07-25 13:32:37.869027] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:03:41.048 [2024-07-25 13:32:37.869121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid441883 ] 00:03:41.048 EAL: No free 2048 kB hugepages reported on node 1 00:03:41.048 [2024-07-25 13:32:37.925230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:41.048 [2024-07-25 13:32:38.022159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.306 13:32:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:41.306 13:32:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:03:41.306 13:32:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:41.306 13:32:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:41.306 13:32:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.306 [2024-07-25 13:32:38.266906] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:41.306 request: 00:03:41.306 { 00:03:41.306 "trtype": "tcp", 00:03:41.306 "method": "nvmf_get_transports", 00:03:41.306 "req_id": 1 00:03:41.306 } 00:03:41.306 Got JSON-RPC error response 00:03:41.306 response: 00:03:41.306 { 00:03:41.306 "code": -19, 00:03:41.306 "message": "No such device" 00:03:41.306 } 00:03:41.306 13:32:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:41.306 13:32:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:41.307 13:32:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:41.307 13:32:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.307 [2024-07-25 13:32:38.279021] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:41.307 13:32:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:41.307 13:32:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:41.307 13:32:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:41.307 13:32:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.565 13:32:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:41.565 13:32:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:41.565 { 00:03:41.565 "subsystems": [ 00:03:41.565 { 00:03:41.565 "subsystem": "vfio_user_target", 00:03:41.565 "config": null 00:03:41.565 }, 00:03:41.565 { 00:03:41.565 "subsystem": "keyring", 00:03:41.565 "config": [] 00:03:41.565 }, 00:03:41.565 { 00:03:41.565 "subsystem": "iobuf", 00:03:41.565 "config": [ 00:03:41.565 { 00:03:41.565 "method": "iobuf_set_options", 00:03:41.565 "params": { 00:03:41.565 "small_pool_count": 8192, 00:03:41.565 "large_pool_count": 1024, 00:03:41.565 "small_bufsize": 8192, 00:03:41.565 "large_bufsize": 
135168 00:03:41.565 } 00:03:41.565 } 00:03:41.565 ] 00:03:41.565 }, 00:03:41.565 { 00:03:41.565 "subsystem": "sock", 00:03:41.565 "config": [ 00:03:41.565 { 00:03:41.565 "method": "sock_set_default_impl", 00:03:41.565 "params": { 00:03:41.565 "impl_name": "posix" 00:03:41.565 } 00:03:41.565 }, 00:03:41.565 { 00:03:41.565 "method": "sock_impl_set_options", 00:03:41.565 "params": { 00:03:41.565 "impl_name": "ssl", 00:03:41.565 "recv_buf_size": 4096, 00:03:41.565 "send_buf_size": 4096, 00:03:41.565 "enable_recv_pipe": true, 00:03:41.565 "enable_quickack": false, 00:03:41.565 "enable_placement_id": 0, 00:03:41.565 "enable_zerocopy_send_server": true, 00:03:41.565 "enable_zerocopy_send_client": false, 00:03:41.565 "zerocopy_threshold": 0, 00:03:41.565 "tls_version": 0, 00:03:41.565 "enable_ktls": false 00:03:41.565 } 00:03:41.565 }, 00:03:41.565 { 00:03:41.565 "method": "sock_impl_set_options", 00:03:41.565 "params": { 00:03:41.565 "impl_name": "posix", 00:03:41.565 "recv_buf_size": 2097152, 00:03:41.565 "send_buf_size": 2097152, 00:03:41.565 "enable_recv_pipe": true, 00:03:41.565 "enable_quickack": false, 00:03:41.565 "enable_placement_id": 0, 00:03:41.565 "enable_zerocopy_send_server": true, 00:03:41.565 "enable_zerocopy_send_client": false, 00:03:41.565 "zerocopy_threshold": 0, 00:03:41.565 "tls_version": 0, 00:03:41.565 "enable_ktls": false 00:03:41.565 } 00:03:41.565 } 00:03:41.565 ] 00:03:41.565 }, 00:03:41.565 { 00:03:41.565 "subsystem": "vmd", 00:03:41.565 "config": [] 00:03:41.565 }, 00:03:41.565 { 00:03:41.565 "subsystem": "accel", 00:03:41.565 "config": [ 00:03:41.565 { 00:03:41.565 "method": "accel_set_options", 00:03:41.565 "params": { 00:03:41.565 "small_cache_size": 128, 00:03:41.565 "large_cache_size": 16, 00:03:41.565 "task_count": 2048, 00:03:41.565 "sequence_count": 2048, 00:03:41.565 "buf_count": 2048 00:03:41.565 } 00:03:41.565 } 00:03:41.565 ] 00:03:41.565 }, 00:03:41.565 { 00:03:41.565 "subsystem": "bdev", 00:03:41.565 "config": [ 00:03:41.565 { 00:03:41.565 "method": "bdev_set_options", 00:03:41.565 "params": { 00:03:41.565 "bdev_io_pool_size": 65535, 00:03:41.565 "bdev_io_cache_size": 256, 00:03:41.565 "bdev_auto_examine": true, 00:03:41.565 "iobuf_small_cache_size": 128, 00:03:41.565 "iobuf_large_cache_size": 16 00:03:41.565 } 00:03:41.565 }, 00:03:41.565 { 00:03:41.565 "method": "bdev_raid_set_options", 00:03:41.565 "params": { 00:03:41.565 "process_window_size_kb": 1024, 00:03:41.565 "process_max_bandwidth_mb_sec": 0 00:03:41.565 } 00:03:41.565 }, 00:03:41.565 { 00:03:41.565 "method": "bdev_iscsi_set_options", 00:03:41.565 "params": { 00:03:41.565 "timeout_sec": 30 00:03:41.565 } 00:03:41.565 }, 00:03:41.565 { 00:03:41.565 "method": "bdev_nvme_set_options", 00:03:41.565 "params": { 00:03:41.565 "action_on_timeout": "none", 00:03:41.565 "timeout_us": 0, 00:03:41.565 "timeout_admin_us": 0, 00:03:41.565 "keep_alive_timeout_ms": 10000, 00:03:41.565 "arbitration_burst": 0, 00:03:41.565 "low_priority_weight": 0, 00:03:41.565 "medium_priority_weight": 0, 00:03:41.565 "high_priority_weight": 0, 00:03:41.565 "nvme_adminq_poll_period_us": 10000, 00:03:41.565 "nvme_ioq_poll_period_us": 0, 00:03:41.565 "io_queue_requests": 0, 00:03:41.565 "delay_cmd_submit": true, 00:03:41.565 "transport_retry_count": 4, 00:03:41.565 "bdev_retry_count": 3, 00:03:41.565 "transport_ack_timeout": 0, 00:03:41.565 "ctrlr_loss_timeout_sec": 0, 00:03:41.565 "reconnect_delay_sec": 0, 00:03:41.565 "fast_io_fail_timeout_sec": 0, 00:03:41.565 "disable_auto_failback": false, 00:03:41.565 "generate_uuids": 
false, 00:03:41.565 "transport_tos": 0, 00:03:41.565 "nvme_error_stat": false, 00:03:41.565 "rdma_srq_size": 0, 00:03:41.565 "io_path_stat": false, 00:03:41.565 "allow_accel_sequence": false, 00:03:41.565 "rdma_max_cq_size": 0, 00:03:41.565 "rdma_cm_event_timeout_ms": 0, 00:03:41.565 "dhchap_digests": [ 00:03:41.565 "sha256", 00:03:41.565 "sha384", 00:03:41.565 "sha512" 00:03:41.565 ], 00:03:41.565 "dhchap_dhgroups": [ 00:03:41.565 "null", 00:03:41.565 "ffdhe2048", 00:03:41.565 "ffdhe3072", 00:03:41.565 "ffdhe4096", 00:03:41.565 "ffdhe6144", 00:03:41.565 "ffdhe8192" 00:03:41.565 ] 00:03:41.565 } 00:03:41.565 }, 00:03:41.565 { 00:03:41.566 "method": "bdev_nvme_set_hotplug", 00:03:41.566 "params": { 00:03:41.566 "period_us": 100000, 00:03:41.566 "enable": false 00:03:41.566 } 00:03:41.566 }, 00:03:41.566 { 00:03:41.566 "method": "bdev_wait_for_examine" 00:03:41.566 } 00:03:41.566 ] 00:03:41.566 }, 00:03:41.566 { 00:03:41.566 "subsystem": "scsi", 00:03:41.566 "config": null 00:03:41.566 }, 00:03:41.566 { 00:03:41.566 "subsystem": "scheduler", 00:03:41.566 "config": [ 00:03:41.566 { 00:03:41.566 "method": "framework_set_scheduler", 00:03:41.566 "params": { 00:03:41.566 "name": "static" 00:03:41.566 } 00:03:41.566 } 00:03:41.566 ] 00:03:41.566 }, 00:03:41.566 { 00:03:41.566 "subsystem": "vhost_scsi", 00:03:41.566 "config": [] 00:03:41.566 }, 00:03:41.566 { 00:03:41.566 "subsystem": "vhost_blk", 00:03:41.566 "config": [] 00:03:41.566 }, 00:03:41.566 { 00:03:41.566 "subsystem": "ublk", 00:03:41.566 "config": [] 00:03:41.566 }, 00:03:41.566 { 00:03:41.566 "subsystem": "nbd", 00:03:41.566 "config": [] 00:03:41.566 }, 00:03:41.566 { 00:03:41.566 "subsystem": "nvmf", 00:03:41.566 "config": [ 00:03:41.566 { 00:03:41.566 "method": "nvmf_set_config", 00:03:41.566 "params": { 00:03:41.566 "discovery_filter": "match_any", 00:03:41.566 "admin_cmd_passthru": { 00:03:41.566 "identify_ctrlr": false 00:03:41.566 } 00:03:41.566 } 00:03:41.566 }, 00:03:41.566 { 00:03:41.566 "method": "nvmf_set_max_subsystems", 00:03:41.566 "params": { 00:03:41.566 "max_subsystems": 1024 00:03:41.566 } 00:03:41.566 }, 00:03:41.566 { 00:03:41.566 "method": "nvmf_set_crdt", 00:03:41.566 "params": { 00:03:41.566 "crdt1": 0, 00:03:41.566 "crdt2": 0, 00:03:41.566 "crdt3": 0 00:03:41.566 } 00:03:41.566 }, 00:03:41.566 { 00:03:41.566 "method": "nvmf_create_transport", 00:03:41.566 "params": { 00:03:41.566 "trtype": "TCP", 00:03:41.566 "max_queue_depth": 128, 00:03:41.566 "max_io_qpairs_per_ctrlr": 127, 00:03:41.566 "in_capsule_data_size": 4096, 00:03:41.566 "max_io_size": 131072, 00:03:41.566 "io_unit_size": 131072, 00:03:41.566 "max_aq_depth": 128, 00:03:41.566 "num_shared_buffers": 511, 00:03:41.566 "buf_cache_size": 4294967295, 00:03:41.566 "dif_insert_or_strip": false, 00:03:41.566 "zcopy": false, 00:03:41.566 "c2h_success": true, 00:03:41.566 "sock_priority": 0, 00:03:41.566 "abort_timeout_sec": 1, 00:03:41.566 "ack_timeout": 0, 00:03:41.566 "data_wr_pool_size": 0 00:03:41.566 } 00:03:41.566 } 00:03:41.566 ] 00:03:41.566 }, 00:03:41.566 { 00:03:41.566 "subsystem": "iscsi", 00:03:41.566 "config": [ 00:03:41.566 { 00:03:41.566 "method": "iscsi_set_options", 00:03:41.566 "params": { 00:03:41.566 "node_base": "iqn.2016-06.io.spdk", 00:03:41.566 "max_sessions": 128, 00:03:41.566 "max_connections_per_session": 2, 00:03:41.566 "max_queue_depth": 64, 00:03:41.566 "default_time2wait": 2, 00:03:41.566 "default_time2retain": 20, 00:03:41.566 "first_burst_length": 8192, 00:03:41.566 "immediate_data": true, 00:03:41.566 "allow_duplicated_isid": 
false, 00:03:41.566 "error_recovery_level": 0, 00:03:41.566 "nop_timeout": 60, 00:03:41.566 "nop_in_interval": 30, 00:03:41.566 "disable_chap": false, 00:03:41.566 "require_chap": false, 00:03:41.566 "mutual_chap": false, 00:03:41.566 "chap_group": 0, 00:03:41.566 "max_large_datain_per_connection": 64, 00:03:41.566 "max_r2t_per_connection": 4, 00:03:41.566 "pdu_pool_size": 36864, 00:03:41.566 "immediate_data_pool_size": 16384, 00:03:41.566 "data_out_pool_size": 2048 00:03:41.566 } 00:03:41.566 } 00:03:41.566 ] 00:03:41.566 } 00:03:41.566 ] 00:03:41.566 } 00:03:41.566 13:32:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:41.566 13:32:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 441883 00:03:41.566 13:32:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 441883 ']' 00:03:41.566 13:32:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 441883 00:03:41.566 13:32:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:03:41.566 13:32:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:41.566 13:32:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 441883 00:03:41.566 13:32:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:41.566 13:32:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:41.566 13:32:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 441883' 00:03:41.566 killing process with pid 441883 00:03:41.566 13:32:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 441883 00:03:41.566 13:32:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 441883 00:03:42.132 13:32:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=442023 00:03:42.132 13:32:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:42.132 13:32:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:47.393 13:32:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 442023 00:03:47.393 13:32:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 442023 ']' 00:03:47.393 13:32:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 442023 00:03:47.393 13:32:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:03:47.393 13:32:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:47.393 13:32:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 442023 00:03:47.393 13:32:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:47.393 13:32:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:47.393 13:32:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 442023' 00:03:47.393 killing process with pid 442023 00:03:47.393 13:32:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 442023 00:03:47.394 13:32:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 442023 
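That completes the round trip the JSON test is really about: the configuration dumped by save_config above, every subsystem's *_set_* and *_create methods, is fed back through --json, and the replay is verified by grepping the new target's log rather than by any RPC. In sketch form, with the workspace paths shortened:

  ./scripts/rpc.py nvmf_create_transport -t tcp            # as at skip_rpc.sh@34 above
  ./scripts/rpc.py save_config > config.json
  kill "$spdk_pid"; wait "$spdk_pid"
  # replaying the file re-issues nvmf_create_transport at boot, with no client attached
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
  sleep 5; grep -q 'TCP Transport Init' log.txt            # the check that follows below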
00:03:47.394 13:32:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:47.394 13:32:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:47.394 00:03:47.394 real 0m6.518s 00:03:47.394 user 0m6.159s 00:03:47.394 sys 0m0.635s 00:03:47.394 13:32:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:47.394 13:32:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:47.394 ************************************ 00:03:47.394 END TEST skip_rpc_with_json 00:03:47.394 ************************************ 00:03:47.394 13:32:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:47.394 13:32:44 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:47.394 13:32:44 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:47.394 13:32:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.394 ************************************ 00:03:47.394 START TEST skip_rpc_with_delay 00:03:47.394 ************************************ 00:03:47.394 13:32:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:03:47.394 13:32:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:47.394 13:32:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:03:47.394 13:32:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:47.394 13:32:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.394 13:32:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:47.394 13:32:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.394 13:32:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:47.394 13:32:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.394 13:32:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:47.394 13:32:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.394 13:32:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:47.394 13:32:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:47.653 [2024-07-25 13:32:44.437054] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
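The delay test needs no socket at all; it only checks that the target rejects an impossible flag pair up front. --wait-for-rpc holds initialization until a framework_start_init RPC arrives (that RPC name is the usual mechanism, not something this log shows), and --no-rpc-server guarantees it never can, hence the immediate ERROR above and a non-zero exit:

  if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "FAIL: target should have refused to start"; exit 1
  fi   # stderr: Cannot use '--wait-for-rpc' if no RPC server is going to be started.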
00:03:47.653 [2024-07-25 13:32:44.437159] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:47.653 13:32:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:03:47.653 13:32:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:47.653 13:32:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:47.653 13:32:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:47.653 00:03:47.653 real 0m0.066s 00:03:47.653 user 0m0.044s 00:03:47.653 sys 0m0.021s 00:03:47.653 13:32:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:47.653 13:32:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:47.653 ************************************ 00:03:47.653 END TEST skip_rpc_with_delay 00:03:47.653 ************************************ 00:03:47.653 13:32:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:47.653 13:32:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:47.653 13:32:44 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:47.653 13:32:44 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:47.653 13:32:44 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:47.653 13:32:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.653 ************************************ 00:03:47.653 START TEST exit_on_failed_rpc_init 00:03:47.653 ************************************ 00:03:47.653 13:32:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:03:47.653 13:32:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=442741 00:03:47.653 13:32:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:47.653 13:32:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 442741 00:03:47.653 13:32:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 442741 ']' 00:03:47.653 13:32:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:47.653 13:32:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:47.653 13:32:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:47.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:47.653 13:32:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:47.653 13:32:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:47.653 [2024-07-25 13:32:44.554279] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:03:47.653 [2024-07-25 13:32:44.554365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442741 ] 00:03:47.653 EAL: No free 2048 kB hugepages reported on node 1 00:03:47.653 [2024-07-25 13:32:44.609407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.911 [2024-07-25 13:32:44.709474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.169 13:32:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:48.170 13:32:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:03:48.170 13:32:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:48.170 13:32:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:48.170 13:32:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:03:48.170 13:32:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:48.170 13:32:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.170 13:32:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:48.170 13:32:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.170 13:32:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:48.170 13:32:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.170 13:32:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:48.170 13:32:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.170 13:32:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:48.170 13:32:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:48.170 [2024-07-25 13:32:45.006017] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:03:48.170 [2024-07-25 13:32:45.006107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442746 ] 00:03:48.170 EAL: No free 2048 kB hugepages reported on node 1 00:03:48.170 [2024-07-25 13:32:45.061488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.170 [2024-07-25 13:32:45.169751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:03:48.170 [2024-07-25 13:32:45.169872] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:48.170 [2024-07-25 13:32:45.169891] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:48.170 [2024-07-25 13:32:45.169902] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:48.428 13:32:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:03:48.428 13:32:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:48.428 13:32:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:03:48.428 13:32:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:03:48.428 13:32:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:03:48.428 13:32:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:48.428 13:32:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:48.428 13:32:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 442741 00:03:48.428 13:32:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 442741 ']' 00:03:48.428 13:32:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 442741 00:03:48.428 13:32:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:03:48.428 13:32:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:48.428 13:32:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 442741 00:03:48.428 13:32:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:48.428 13:32:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:48.428 13:32:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 442741' 00:03:48.428 killing process with pid 442741 00:03:48.428 13:32:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 442741 00:03:48.428 13:32:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 442741 00:03:48.994 00:03:48.994 real 0m1.257s 00:03:48.994 user 0m1.417s 00:03:48.994 sys 0m0.434s 00:03:48.994 13:32:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:48.994 13:32:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:48.994 ************************************ 00:03:48.994 END TEST exit_on_failed_rpc_init 00:03:48.994 ************************************ 00:03:48.994 13:32:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 
00:03:48.994 00:03:48.994 real 0m13.553s 00:03:48.994 user 0m12.876s 00:03:48.994 sys 0m1.570s 00:03:48.994 13:32:45 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:48.994 13:32:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.994 ************************************ 00:03:48.994 END TEST skip_rpc 00:03:48.994 ************************************ 00:03:48.994 13:32:45 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:48.994 13:32:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:48.994 13:32:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:48.994 13:32:45 -- common/autotest_common.sh@10 -- # set +x 00:03:48.994 ************************************ 00:03:48.994 START TEST rpc_client 00:03:48.994 ************************************ 00:03:48.994 13:32:45 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:48.994 * Looking for test storage... 00:03:48.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:48.994 13:32:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:48.994 OK 00:03:48.994 13:32:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:48.994 00:03:48.994 real 0m0.071s 00:03:48.994 user 0m0.033s 00:03:48.994 sys 0m0.043s 00:03:48.994 13:32:45 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:48.994 13:32:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:48.994 ************************************ 00:03:48.994 END TEST rpc_client 00:03:48.994 ************************************ 00:03:48.994 13:32:45 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:48.994 13:32:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:48.994 13:32:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:48.994 13:32:45 -- common/autotest_common.sh@10 -- # set +x 00:03:48.994 ************************************ 00:03:48.994 START TEST json_config 00:03:48.994 ************************************ 00:03:48.994 13:32:45 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:48.994 13:32:45 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:48.994 13:32:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:48.994 13:32:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:48.994 13:32:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:48.994 13:32:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:48.994 13:32:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:48.994 13:32:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:48.994 13:32:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:48.994 13:32:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:48.994 13:32:45 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:48.994 13:32:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:48.994 13:32:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:03:48.995 13:32:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:48.995 13:32:45 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:48.995 13:32:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:48.995 13:32:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:48.995 13:32:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:48.995 13:32:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:48.995 13:32:45 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:48.995 13:32:45 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:48.995 13:32:45 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:48.995 13:32:45 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:48.995 13:32:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.995 13:32:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.995 13:32:45 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.995 13:32:45 json_config -- paths/export.sh@5 -- # export PATH 00:03:48.995 13:32:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.995 13:32:45 json_config -- nvmf/common.sh@47 -- # : 0 00:03:48.995 13:32:45 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:48.995 13:32:45 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:48.995 13:32:45 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:48.995 13:32:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:48.995 13:32:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:48.995 13:32:45 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:03:48.995 13:32:45 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:48.995 13:32:45 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:48.995 13:32:46 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:48.995 13:32:46 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:48.995 13:32:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:48.995 13:32:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:48.995 13:32:46 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:48.995 13:32:46 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:48.995 13:32:46 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:48.995 13:32:46 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:48.995 13:32:46 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:48.995 13:32:46 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:48.995 13:32:46 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:48.995 13:32:46 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:48.995 13:32:46 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:48.995 13:32:46 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:48.995 13:32:46 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:48.995 13:32:46 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' INFO: JSON configuration test init 00:03:48.995 13:32:46 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:03:48.995 13:32:46 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:03:48.995 13:32:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:48.995 13:32:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.995 13:32:46 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:03:48.995 13:32:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:48.995 13:32:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.995 13:32:46 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:03:48.995 13:32:46 json_config -- json_config/common.sh@9 -- # local app=target 00:03:48.995 13:32:46 json_config -- json_config/common.sh@10 -- # shift 00:03:48.995 13:32:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:48.995 13:32:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:48.995 13:32:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:48.995 13:32:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:48.995 13:32:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:03:48.995 13:32:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=442988 00:03:48.995 13:32:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:48.995 13:32:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' Waiting for target to run... 00:03:48.995 13:32:46 json_config -- json_config/common.sh@25 -- # waitforlisten 442988 /var/tmp/spdk_tgt.sock 00:03:48.995 13:32:46 json_config -- common/autotest_common.sh@831 -- # '[' -z 442988 ']' 00:03:48.995 13:32:46 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:48.995 13:32:46 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:48.995 13:32:46 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:48.995 13:32:46 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:48.995 13:32:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.253 [2024-07-25 13:32:46.059826] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:03:49.253 [2024-07-25 13:32:46.059926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid442988 ] 00:03:49.253 EAL: No free 2048 kB hugepages reported on node 1 00:03:49.511 [2024-07-25 13:32:46.397126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.511 [2024-07-25 13:32:46.476023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:50.075 13:32:46 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:50.075 13:32:46 json_config -- common/autotest_common.sh@864 -- # return 0 00:03:50.075 13:32:46 json_config -- json_config/common.sh@26 -- # echo '' 00:03:50.075 00:03:50.075 13:32:46 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:03:50.075 13:32:46 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:03:50.075 13:32:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:50.075 13:32:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.075 13:32:46 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:03:50.075 13:32:46 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:03:50.075 13:32:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:50.075 13:32:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.075 13:32:47 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:50.075 13:32:47 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:03:50.075 13:32:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:53.361 13:32:50 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:03:53.361 13:32:50 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:03:53.361 13:32:50 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:53.361 13:32:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.361 13:32:50 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:53.361 13:32:50 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:53.361 13:32:50 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:53.361 13:32:50 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:03:53.361 13:32:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:53.361 13:32:50 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:03:53.619 13:32:50 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:53.619 13:32:50 json_config -- json_config/json_config.sh@48 -- # local get_types 00:03:53.619 13:32:50 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:03:53.619 13:32:50 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:03:53.619 13:32:50 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:03:53.619 13:32:50 json_config -- json_config/json_config.sh@51 -- # sort 00:03:53.619 13:32:50 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:03:53.619 13:32:50 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:03:53.619 13:32:50 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:03:53.619 13:32:50 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:03:53.619 13:32:50 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:53.619 13:32:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.619 13:32:50 json_config -- json_config/json_config.sh@59 -- # return 0 00:03:53.619 13:32:50 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:03:53.619 13:32:50 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:03:53.619 13:32:50 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:03:53.619 13:32:50 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:03:53.619 13:32:50 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:03:53.619 13:32:50 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:03:53.619 13:32:50 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:53.619 13:32:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.619 13:32:50 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:53.619 13:32:50 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:03:53.619 13:32:50 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:03:53.619 13:32:50 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:53.619 13:32:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:53.877 MallocForNvmf0 00:03:53.877 13:32:50 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:03:53.877 13:32:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:54.135 MallocForNvmf1 00:03:54.135 13:32:50 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:54.135 13:32:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:54.393 [2024-07-25 13:32:51.173665] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:54.393 13:32:51 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:54.393 13:32:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:54.651 13:32:51 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:54.651 13:32:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:54.651 13:32:51 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:54.909 13:32:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:54.909 13:32:51 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:54.909 13:32:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:55.166 [2024-07-25 13:32:52.160872] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:55.167 13:32:52 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:03:55.167 13:32:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:55.167 13:32:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.424 13:32:52 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:03:55.424 13:32:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:55.424 13:32:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.424 13:32:52 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:03:55.424 13:32:52 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:55.424 13:32:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:55.681 MallocBdevForConfigChangeCheck 00:03:55.681 13:32:52 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init
00:03:55.681 13:32:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:55.681 13:32:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.681 13:32:52 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:03:55.681 13:32:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:55.938 13:32:52 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' INFO: shutting down applications... 00:03:55.938 13:32:52 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:03:55.938 13:32:52 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:03:55.938 13:32:52 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:03:55.938 13:32:52 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:57.835 Calling clear_iscsi_subsystem 00:03:57.835 Calling clear_nvmf_subsystem 00:03:57.835 Calling clear_nbd_subsystem 00:03:57.835 Calling clear_ublk_subsystem 00:03:57.835 Calling clear_vhost_blk_subsystem 00:03:57.835 Calling clear_vhost_scsi_subsystem 00:03:57.835 Calling clear_bdev_subsystem 00:03:57.835 13:32:54 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:57.835 13:32:54 json_config -- json_config/json_config.sh@347 -- # count=100 00:03:57.835 13:32:54 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:03:57.835 13:32:54 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:57.835 13:32:54 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:57.835 13:32:54 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:57.835 13:32:54 json_config -- json_config/json_config.sh@349 -- # break 00:03:57.835 13:32:54 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:03:57.835 13:32:54 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:03:57.835 13:32:54 json_config -- json_config/common.sh@31 -- # local app=target 00:03:57.835 13:32:54 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:57.835 13:32:54 json_config -- json_config/common.sh@35 -- # [[ -n 442988 ]] 00:03:57.835 13:32:54 json_config -- json_config/common.sh@38 -- # kill -SIGINT 442988 00:03:57.835 13:32:54 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:57.835 13:32:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:57.835 13:32:54 json_config -- json_config/common.sh@41 -- # kill -0 442988 00:03:57.835 13:32:54 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:58.404 13:32:55 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:58.404 13:32:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:58.404 13:32:55 json_config -- json_config/common.sh@41 -- # kill -0 442988 00:03:58.404 13:32:55 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:58.404 13:32:55 json_config -- json_config/common.sh@43 -- # break
00:03:58.404 13:32:55 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:58.404 13:32:55 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:58.404 SPDK target shutdown done 00:03:58.404 13:32:55 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:03:58.404 INFO: relaunching applications... 00:03:58.404 13:32:55 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.404 13:32:55 json_config -- json_config/common.sh@9 -- # local app=target 00:03:58.404 13:32:55 json_config -- json_config/common.sh@10 -- # shift 00:03:58.404 13:32:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:58.404 13:32:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:58.404 13:32:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:58.404 13:32:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:58.404 13:32:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:58.404 13:32:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=444295 00:03:58.404 13:32:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.404 13:32:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:58.404 Waiting for target to run... 00:03:58.404 13:32:55 json_config -- json_config/common.sh@25 -- # waitforlisten 444295 /var/tmp/spdk_tgt.sock 00:03:58.404 13:32:55 json_config -- common/autotest_common.sh@831 -- # '[' -z 444295 ']' 00:03:58.404 13:32:55 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:58.404 13:32:55 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:58.404 13:32:55 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:58.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:58.404 13:32:55 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:58.404 13:32:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.404 [2024-07-25 13:32:55.424258] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:03:58.404 [2024-07-25 13:32:55.424346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid444295 ] 00:03:58.664 EAL: No free 2048 kB hugepages reported on node 1 00:03:58.922 [2024-07-25 13:32:55.944774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.180 [2024-07-25 13:32:56.037925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.490 [2024-07-25 13:32:59.067204] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:02.490 [2024-07-25 13:32:59.099697] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:03.055 13:32:59 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:03.055 13:32:59 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:03.055 13:32:59 json_config -- json_config/common.sh@26 -- # echo '' 00:04:03.055 00:04:03.055 13:32:59 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:04:03.055 13:32:59 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:03.055 INFO: Checking if target configuration is the same... 00:04:03.055 13:32:59 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.055 13:32:59 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:04:03.055 13:32:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:03.055 + '[' 2 -ne 2 ']' 00:04:03.055 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:03.055 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:03.055 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:03.055 +++ basename /dev/fd/62 00:04:03.055 ++ mktemp /tmp/62.XXX 00:04:03.055 + tmp_file_1=/tmp/62.MK6 00:04:03.055 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.055 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:03.055 + tmp_file_2=/tmp/spdk_tgt_config.json.L8g 00:04:03.055 + ret=0 00:04:03.055 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:03.313 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:03.313 + diff -u /tmp/62.MK6 /tmp/spdk_tgt_config.json.L8g 00:04:03.313 + echo 'INFO: JSON config files are the same' 00:04:03.313 INFO: JSON config files are the same 00:04:03.313 + rm /tmp/62.MK6 /tmp/spdk_tgt_config.json.L8g 00:04:03.313 + exit 0 00:04:03.313 13:33:00 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:04:03.313 13:33:00 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:03.313 INFO: changing configuration and checking if this can be detected... 
00:04:03.313 13:33:00 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:03.313 13:33:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:03.571 13:33:00 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.571 13:33:00 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:04:03.571 13:33:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:03.571 + '[' 2 -ne 2 ']' 00:04:03.571 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:03.571 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:03.571 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:03.571 +++ basename /dev/fd/62 00:04:03.571 ++ mktemp /tmp/62.XXX 00:04:03.571 + tmp_file_1=/tmp/62.mny 00:04:03.571 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.571 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:03.571 + tmp_file_2=/tmp/spdk_tgt_config.json.xq0 00:04:03.571 + ret=0 00:04:03.571 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:04.137 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:04.137 + diff -u /tmp/62.mny /tmp/spdk_tgt_config.json.xq0 00:04:04.137 + ret=1 00:04:04.137 + echo '=== Start of file: /tmp/62.mny ===' 00:04:04.137 + cat /tmp/62.mny 00:04:04.137 + echo '=== End of file: /tmp/62.mny ===' 00:04:04.137 + echo '' 00:04:04.137 + echo '=== Start of file: /tmp/spdk_tgt_config.json.xq0 ===' 00:04:04.137 + cat /tmp/spdk_tgt_config.json.xq0 00:04:04.137 + echo '=== End of file: /tmp/spdk_tgt_config.json.xq0 ===' 00:04:04.137 + echo '' 00:04:04.137 + rm /tmp/62.mny /tmp/spdk_tgt_config.json.xq0 00:04:04.137 + exit 1 00:04:04.137 13:33:00 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:04:04.137 INFO: configuration change detected. 
00:04:04.137 13:33:00 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:04:04.137 13:33:00 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:04:04.137 13:33:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:04.137 13:33:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.137 13:33:00 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:04:04.137 13:33:00 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:04:04.137 13:33:00 json_config -- json_config/json_config.sh@321 -- # [[ -n 444295 ]] 00:04:04.137 13:33:00 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:04:04.137 13:33:00 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:04:04.137 13:33:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:04.137 13:33:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.137 13:33:00 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:04:04.137 13:33:00 json_config -- json_config/json_config.sh@197 -- # uname -s 00:04:04.138 13:33:00 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:04:04.138 13:33:00 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:04:04.138 13:33:00 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:04:04.138 13:33:00 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:04:04.138 13:33:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:04.138 13:33:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.138 13:33:00 json_config -- json_config/json_config.sh@327 -- # killprocess 444295 00:04:04.138 13:33:00 json_config -- common/autotest_common.sh@950 -- # '[' -z 444295 ']' 00:04:04.138 13:33:00 json_config -- common/autotest_common.sh@954 -- # kill -0 444295 00:04:04.138 13:33:00 json_config -- common/autotest_common.sh@955 -- # uname 00:04:04.138 13:33:00 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:04.138 13:33:01 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 444295 00:04:04.138 13:33:01 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:04.138 13:33:01 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:04.138 13:33:01 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 444295' killing process with pid 444295 00:04:04.138 13:33:01 json_config -- common/autotest_common.sh@969 -- # kill 444295 00:04:04.138 13:33:01 json_config -- common/autotest_common.sh@974 -- # wait 444295 00:04:06.035 13:33:02 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:06.035 13:33:02 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:04:06.035 13:33:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:06.035 13:33:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.035 13:33:02 json_config -- json_config/json_config.sh@332 -- # return 0 00:04:06.035 13:33:02 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' INFO: Success 00:04:06.035 00:04:06.035 real 0m16.734s 00:04:06.035 user 0m18.727s
00:04:06.035 sys 0m2.010s 00:04:06.035 13:33:02 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.035 13:33:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.035 ************************************ 00:04:06.035 END TEST json_config 00:04:06.035 ************************************ 00:04:06.035 13:33:02 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:06.035 13:33:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.035 13:33:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.035 13:33:02 -- common/autotest_common.sh@10 -- # set +x 00:04:06.035 ************************************ 00:04:06.035 START TEST json_config_extra_key 00:04:06.035 ************************************ 00:04:06.035 13:33:02 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:06.035 13:33:02 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:06.035 13:33:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:06.035 13:33:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:06.035 13:33:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:06.035 13:33:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:06.035 13:33:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:06.035 13:33:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:06.035 13:33:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:06.035 13:33:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:06.035 13:33:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:06.035 13:33:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:06.035 13:33:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:06.035 13:33:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:06.035 13:33:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:06.035 13:33:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:06.035 13:33:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:06.035 13:33:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:06.035 13:33:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:06.035 13:33:02 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:06.035 13:33:02 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:06.035 13:33:02 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:06.035 13:33:02 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:06.035 13:33:02 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.035 13:33:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.036 13:33:02 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.036 13:33:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:06.036 13:33:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.036 13:33:02 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:06.036 13:33:02 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:06.036 13:33:02 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:06.036 13:33:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:06.036 13:33:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:06.036 13:33:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:06.036 13:33:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:06.036 13:33:02 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:06.036 13:33:02 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:06.036 13:33:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:06.036 13:33:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:06.036 13:33:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:06.036 13:33:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:06.036 13:33:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:06.036 13:33:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:04:06.036 13:33:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:06.036 13:33:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:06.036 13:33:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:06.036 13:33:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:06.036 13:33:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' INFO: launching applications... 00:04:06.036 13:33:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:06.036 13:33:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:06.036 13:33:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:06.036 13:33:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:06.036 13:33:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:06.036 13:33:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:06.036 13:33:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.036 13:33:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.036 13:33:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=445347 00:04:06.036 13:33:02 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:06.036 13:33:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' Waiting for target to run... 00:04:06.036 13:33:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 445347 /var/tmp/spdk_tgt.sock 00:04:06.036 13:33:02 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 445347 ']' 00:04:06.036 13:33:02 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:06.036 13:33:02 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:06.036 13:33:02 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:06.036 13:33:02 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:06.036 13:33:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:06.036 [2024-07-25 13:33:02.835601] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:04:06.036 [2024-07-25 13:33:02.835685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid445347 ] 00:04:06.036 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.600 [2024-07-25 13:33:03.333136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.600 [2024-07-25 13:33:03.426509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.857 13:33:03 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:06.857 13:33:03 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:06.857 13:33:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:06.857 00:04:06.857 13:33:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' INFO: shutting down applications... 13:33:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 13:33:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 13:33:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 13:33:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 445347 ]] 13:33:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 445347 13:33:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 13:33:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 13:33:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 445347 13:33:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:07.422 13:33:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:07.422 13:33:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:07.422 13:33:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 445347 00:04:07.422 13:33:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:07.422 13:33:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:07.422 13:33:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:07.422 13:33:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' SPDK target shutdown done 13:33:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success Success 00:04:07.422 00:04:07.422 real 0m1.552s 00:04:07.422 user 0m1.386s 00:04:07.422 sys 0m0.603s 00:04:07.422 13:33:04 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.422 13:33:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:07.422 ************************************ 00:04:07.422 END TEST json_config_extra_key 00:04:07.422 ************************************ 00:04:07.423 13:33:04 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:07.423 13:33:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.423 13:33:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.423 13:33:04 -- common/autotest_common.sh@10 -- # set +x
00:04:07.423 ************************************ 00:04:07.423 START TEST alias_rpc 00:04:07.423 ************************************ 00:04:07.423 13:33:04 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:07.423 * Looking for test storage... 00:04:07.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:07.423 13:33:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:07.423 13:33:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=445595 00:04:07.423 13:33:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:07.423 13:33:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 445595 00:04:07.423 13:33:04 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 445595 ']' 00:04:07.423 13:33:04 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.423 13:33:04 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:07.423 13:33:04 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.423 13:33:04 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:07.423 13:33:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.423 [2024-07-25 13:33:04.444252] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:07.423 [2024-07-25 13:33:04.444336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid445595 ] 00:04:07.681 EAL: No free 2048 kB hugepages reported on node 1 00:04:07.681 [2024-07-25 13:33:04.503316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.681 [2024-07-25 13:33:04.608770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.938 13:33:04 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:07.939 13:33:04 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:07.939 13:33:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:08.195 13:33:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 445595 00:04:08.195 13:33:05 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 445595 ']' 00:04:08.195 13:33:05 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 445595 00:04:08.195 13:33:05 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:08.195 13:33:05 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:08.195 13:33:05 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 445595 00:04:08.195 13:33:05 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:08.195 13:33:05 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:08.195 13:33:05 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 445595' killing process with pid 445595 00:04:08.195 13:33:05 alias_rpc -- common/autotest_common.sh@969 -- # kill 445595
13:33:05 alias_rpc -- common/autotest_common.sh@974 -- # wait 445595 00:04:08.760 00:04:08.760 real 0m1.260s 00:04:08.760 user 0m1.364s 00:04:08.760 sys 0m0.414s 00:04:08.760 13:33:05 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.760 13:33:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.760 ************************************ 00:04:08.760 END TEST alias_rpc 00:04:08.760 ************************************ 00:04:08.760 13:33:05 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:08.760 13:33:05 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:08.760 13:33:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.760 13:33:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.760 13:33:05 -- common/autotest_common.sh@10 -- # set +x 00:04:08.760 ************************************ 00:04:08.760 START TEST spdkcli_tcp 00:04:08.760 ************************************ 00:04:08.760 13:33:05 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:08.760 * Looking for test storage... 00:04:08.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:08.760 13:33:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:08.760 13:33:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:08.760 13:33:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:08.760 13:33:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:08.760 13:33:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:08.760 13:33:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:08.760 13:33:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:08.760 13:33:05 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:08.760 13:33:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:08.760 13:33:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=445832 00:04:08.760 13:33:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:08.760 13:33:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 445832 00:04:08.761 13:33:05 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 445832 ']' 00:04:08.761 13:33:05 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.761 13:33:05 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:08.761 13:33:05 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:08.761 13:33:05 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:08.761 13:33:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:08.761 [2024-07-25 13:33:05.758123] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:04:08.761 [2024-07-25 13:33:05.758219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid445832 ] 00:04:08.761 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.018 [2024-07-25 13:33:05.815717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:09.018 [2024-07-25 13:33:05.922263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:09.018 [2024-07-25 13:33:05.922267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.275 13:33:06 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:09.275 13:33:06 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:09.275 13:33:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=445850 00:04:09.275 13:33:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:09.275 13:33:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:09.531 [ 00:04:09.531 "bdev_malloc_delete", 00:04:09.531 "bdev_malloc_create", 00:04:09.531 "bdev_null_resize", 00:04:09.531 "bdev_null_delete", 00:04:09.531 "bdev_null_create", 00:04:09.531 "bdev_nvme_cuse_unregister", 00:04:09.531 "bdev_nvme_cuse_register", 00:04:09.531 "bdev_opal_new_user", 00:04:09.531 "bdev_opal_set_lock_state", 00:04:09.531 "bdev_opal_delete", 00:04:09.531 "bdev_opal_get_info", 00:04:09.531 "bdev_opal_create", 00:04:09.531 "bdev_nvme_opal_revert", 00:04:09.531 "bdev_nvme_opal_init", 00:04:09.531 "bdev_nvme_send_cmd", 00:04:09.531 "bdev_nvme_get_path_iostat", 00:04:09.531 "bdev_nvme_get_mdns_discovery_info", 00:04:09.531 "bdev_nvme_stop_mdns_discovery", 00:04:09.531 "bdev_nvme_start_mdns_discovery", 00:04:09.531 "bdev_nvme_set_multipath_policy", 00:04:09.531 "bdev_nvme_set_preferred_path", 00:04:09.531 "bdev_nvme_get_io_paths", 00:04:09.531 "bdev_nvme_remove_error_injection", 00:04:09.531 "bdev_nvme_add_error_injection", 00:04:09.531 "bdev_nvme_get_discovery_info", 00:04:09.531 "bdev_nvme_stop_discovery", 00:04:09.531 "bdev_nvme_start_discovery", 00:04:09.531 "bdev_nvme_get_controller_health_info", 00:04:09.531 "bdev_nvme_disable_controller", 00:04:09.531 "bdev_nvme_enable_controller", 00:04:09.531 "bdev_nvme_reset_controller", 00:04:09.531 "bdev_nvme_get_transport_statistics", 00:04:09.531 "bdev_nvme_apply_firmware", 00:04:09.531 "bdev_nvme_detach_controller", 00:04:09.531 "bdev_nvme_get_controllers", 00:04:09.531 "bdev_nvme_attach_controller", 00:04:09.531 "bdev_nvme_set_hotplug", 00:04:09.531 "bdev_nvme_set_options", 00:04:09.531 "bdev_passthru_delete", 00:04:09.531 "bdev_passthru_create", 00:04:09.531 "bdev_lvol_set_parent_bdev", 00:04:09.531 "bdev_lvol_set_parent", 00:04:09.531 "bdev_lvol_check_shallow_copy", 00:04:09.531 "bdev_lvol_start_shallow_copy", 00:04:09.531 "bdev_lvol_grow_lvstore", 00:04:09.531 "bdev_lvol_get_lvols", 00:04:09.531 "bdev_lvol_get_lvstores", 00:04:09.531 "bdev_lvol_delete", 00:04:09.531 "bdev_lvol_set_read_only", 00:04:09.531 "bdev_lvol_resize", 00:04:09.531 "bdev_lvol_decouple_parent", 00:04:09.531 "bdev_lvol_inflate", 00:04:09.531 "bdev_lvol_rename", 00:04:09.531 "bdev_lvol_clone_bdev", 00:04:09.531 "bdev_lvol_clone", 00:04:09.531 "bdev_lvol_snapshot", 00:04:09.531 "bdev_lvol_create", 00:04:09.531 "bdev_lvol_delete_lvstore", 00:04:09.531 
"bdev_lvol_rename_lvstore", 00:04:09.531 "bdev_lvol_create_lvstore", 00:04:09.531 "bdev_raid_set_options", 00:04:09.531 "bdev_raid_remove_base_bdev", 00:04:09.531 "bdev_raid_add_base_bdev", 00:04:09.531 "bdev_raid_delete", 00:04:09.531 "bdev_raid_create", 00:04:09.531 "bdev_raid_get_bdevs", 00:04:09.531 "bdev_error_inject_error", 00:04:09.531 "bdev_error_delete", 00:04:09.531 "bdev_error_create", 00:04:09.531 "bdev_split_delete", 00:04:09.531 "bdev_split_create", 00:04:09.531 "bdev_delay_delete", 00:04:09.531 "bdev_delay_create", 00:04:09.531 "bdev_delay_update_latency", 00:04:09.531 "bdev_zone_block_delete", 00:04:09.531 "bdev_zone_block_create", 00:04:09.531 "blobfs_create", 00:04:09.531 "blobfs_detect", 00:04:09.531 "blobfs_set_cache_size", 00:04:09.531 "bdev_aio_delete", 00:04:09.531 "bdev_aio_rescan", 00:04:09.531 "bdev_aio_create", 00:04:09.531 "bdev_ftl_set_property", 00:04:09.531 "bdev_ftl_get_properties", 00:04:09.531 "bdev_ftl_get_stats", 00:04:09.531 "bdev_ftl_unmap", 00:04:09.531 "bdev_ftl_unload", 00:04:09.531 "bdev_ftl_delete", 00:04:09.531 "bdev_ftl_load", 00:04:09.531 "bdev_ftl_create", 00:04:09.531 "bdev_virtio_attach_controller", 00:04:09.531 "bdev_virtio_scsi_get_devices", 00:04:09.531 "bdev_virtio_detach_controller", 00:04:09.531 "bdev_virtio_blk_set_hotplug", 00:04:09.531 "bdev_iscsi_delete", 00:04:09.531 "bdev_iscsi_create", 00:04:09.531 "bdev_iscsi_set_options", 00:04:09.531 "accel_error_inject_error", 00:04:09.531 "ioat_scan_accel_module", 00:04:09.531 "dsa_scan_accel_module", 00:04:09.531 "iaa_scan_accel_module", 00:04:09.531 "vfu_virtio_create_scsi_endpoint", 00:04:09.531 "vfu_virtio_scsi_remove_target", 00:04:09.531 "vfu_virtio_scsi_add_target", 00:04:09.531 "vfu_virtio_create_blk_endpoint", 00:04:09.531 "vfu_virtio_delete_endpoint", 00:04:09.531 "keyring_file_remove_key", 00:04:09.531 "keyring_file_add_key", 00:04:09.531 "keyring_linux_set_options", 00:04:09.531 "iscsi_get_histogram", 00:04:09.531 "iscsi_enable_histogram", 00:04:09.531 "iscsi_set_options", 00:04:09.531 "iscsi_get_auth_groups", 00:04:09.531 "iscsi_auth_group_remove_secret", 00:04:09.531 "iscsi_auth_group_add_secret", 00:04:09.531 "iscsi_delete_auth_group", 00:04:09.531 "iscsi_create_auth_group", 00:04:09.531 "iscsi_set_discovery_auth", 00:04:09.531 "iscsi_get_options", 00:04:09.531 "iscsi_target_node_request_logout", 00:04:09.531 "iscsi_target_node_set_redirect", 00:04:09.531 "iscsi_target_node_set_auth", 00:04:09.531 "iscsi_target_node_add_lun", 00:04:09.531 "iscsi_get_stats", 00:04:09.531 "iscsi_get_connections", 00:04:09.531 "iscsi_portal_group_set_auth", 00:04:09.531 "iscsi_start_portal_group", 00:04:09.531 "iscsi_delete_portal_group", 00:04:09.531 "iscsi_create_portal_group", 00:04:09.531 "iscsi_get_portal_groups", 00:04:09.531 "iscsi_delete_target_node", 00:04:09.531 "iscsi_target_node_remove_pg_ig_maps", 00:04:09.531 "iscsi_target_node_add_pg_ig_maps", 00:04:09.531 "iscsi_create_target_node", 00:04:09.531 "iscsi_get_target_nodes", 00:04:09.531 "iscsi_delete_initiator_group", 00:04:09.531 "iscsi_initiator_group_remove_initiators", 00:04:09.531 "iscsi_initiator_group_add_initiators", 00:04:09.531 "iscsi_create_initiator_group", 00:04:09.531 "iscsi_get_initiator_groups", 00:04:09.531 "nvmf_set_crdt", 00:04:09.531 "nvmf_set_config", 00:04:09.531 "nvmf_set_max_subsystems", 00:04:09.531 "nvmf_stop_mdns_prr", 00:04:09.531 "nvmf_publish_mdns_prr", 00:04:09.531 "nvmf_subsystem_get_listeners", 00:04:09.531 "nvmf_subsystem_get_qpairs", 00:04:09.531 "nvmf_subsystem_get_controllers", 00:04:09.531 
"nvmf_get_stats", 00:04:09.531 "nvmf_get_transports", 00:04:09.531 "nvmf_create_transport", 00:04:09.531 "nvmf_get_targets", 00:04:09.531 "nvmf_delete_target", 00:04:09.531 "nvmf_create_target", 00:04:09.531 "nvmf_subsystem_allow_any_host", 00:04:09.531 "nvmf_subsystem_remove_host", 00:04:09.531 "nvmf_subsystem_add_host", 00:04:09.531 "nvmf_ns_remove_host", 00:04:09.531 "nvmf_ns_add_host", 00:04:09.531 "nvmf_subsystem_remove_ns", 00:04:09.531 "nvmf_subsystem_add_ns", 00:04:09.531 "nvmf_subsystem_listener_set_ana_state", 00:04:09.531 "nvmf_discovery_get_referrals", 00:04:09.531 "nvmf_discovery_remove_referral", 00:04:09.531 "nvmf_discovery_add_referral", 00:04:09.531 "nvmf_subsystem_remove_listener", 00:04:09.531 "nvmf_subsystem_add_listener", 00:04:09.531 "nvmf_delete_subsystem", 00:04:09.531 "nvmf_create_subsystem", 00:04:09.531 "nvmf_get_subsystems", 00:04:09.531 "env_dpdk_get_mem_stats", 00:04:09.531 "nbd_get_disks", 00:04:09.531 "nbd_stop_disk", 00:04:09.531 "nbd_start_disk", 00:04:09.531 "ublk_recover_disk", 00:04:09.531 "ublk_get_disks", 00:04:09.531 "ublk_stop_disk", 00:04:09.531 "ublk_start_disk", 00:04:09.531 "ublk_destroy_target", 00:04:09.531 "ublk_create_target", 00:04:09.531 "virtio_blk_create_transport", 00:04:09.531 "virtio_blk_get_transports", 00:04:09.531 "vhost_controller_set_coalescing", 00:04:09.531 "vhost_get_controllers", 00:04:09.531 "vhost_delete_controller", 00:04:09.531 "vhost_create_blk_controller", 00:04:09.531 "vhost_scsi_controller_remove_target", 00:04:09.531 "vhost_scsi_controller_add_target", 00:04:09.531 "vhost_start_scsi_controller", 00:04:09.531 "vhost_create_scsi_controller", 00:04:09.531 "thread_set_cpumask", 00:04:09.532 "framework_get_governor", 00:04:09.532 "framework_get_scheduler", 00:04:09.532 "framework_set_scheduler", 00:04:09.532 "framework_get_reactors", 00:04:09.532 "thread_get_io_channels", 00:04:09.532 "thread_get_pollers", 00:04:09.532 "thread_get_stats", 00:04:09.532 "framework_monitor_context_switch", 00:04:09.532 "spdk_kill_instance", 00:04:09.532 "log_enable_timestamps", 00:04:09.532 "log_get_flags", 00:04:09.532 "log_clear_flag", 00:04:09.532 "log_set_flag", 00:04:09.532 "log_get_level", 00:04:09.532 "log_set_level", 00:04:09.532 "log_get_print_level", 00:04:09.532 "log_set_print_level", 00:04:09.532 "framework_enable_cpumask_locks", 00:04:09.532 "framework_disable_cpumask_locks", 00:04:09.532 "framework_wait_init", 00:04:09.532 "framework_start_init", 00:04:09.532 "scsi_get_devices", 00:04:09.532 "bdev_get_histogram", 00:04:09.532 "bdev_enable_histogram", 00:04:09.532 "bdev_set_qos_limit", 00:04:09.532 "bdev_set_qd_sampling_period", 00:04:09.532 "bdev_get_bdevs", 00:04:09.532 "bdev_reset_iostat", 00:04:09.532 "bdev_get_iostat", 00:04:09.532 "bdev_examine", 00:04:09.532 "bdev_wait_for_examine", 00:04:09.532 "bdev_set_options", 00:04:09.532 "notify_get_notifications", 00:04:09.532 "notify_get_types", 00:04:09.532 "accel_get_stats", 00:04:09.532 "accel_set_options", 00:04:09.532 "accel_set_driver", 00:04:09.532 "accel_crypto_key_destroy", 00:04:09.532 "accel_crypto_keys_get", 00:04:09.532 "accel_crypto_key_create", 00:04:09.532 "accel_assign_opc", 00:04:09.532 "accel_get_module_info", 00:04:09.532 "accel_get_opc_assignments", 00:04:09.532 "vmd_rescan", 00:04:09.532 "vmd_remove_device", 00:04:09.532 "vmd_enable", 00:04:09.532 "sock_get_default_impl", 00:04:09.532 "sock_set_default_impl", 00:04:09.532 "sock_impl_set_options", 00:04:09.532 "sock_impl_get_options", 00:04:09.532 "iobuf_get_stats", 00:04:09.532 "iobuf_set_options", 
00:04:09.532 "keyring_get_keys", 00:04:09.532 "framework_get_pci_devices", 00:04:09.532 "framework_get_config", 00:04:09.532 "framework_get_subsystems", 00:04:09.532 "vfu_tgt_set_base_path", 00:04:09.532 "trace_get_info", 00:04:09.532 "trace_get_tpoint_group_mask", 00:04:09.532 "trace_disable_tpoint_group", 00:04:09.532 "trace_enable_tpoint_group", 00:04:09.532 "trace_clear_tpoint_mask", 00:04:09.532 "trace_set_tpoint_mask", 00:04:09.532 "spdk_get_version", 00:04:09.532 "rpc_get_methods" 00:04:09.532 ] 00:04:09.532 13:33:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:09.532 13:33:06 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:09.532 13:33:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:09.532 13:33:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:09.532 13:33:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 445832 00:04:09.532 13:33:06 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 445832 ']' 00:04:09.532 13:33:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 445832 00:04:09.532 13:33:06 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:09.532 13:33:06 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:09.532 13:33:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 445832 00:04:09.532 13:33:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:09.532 13:33:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:09.532 13:33:06 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 445832' 00:04:09.532 killing process with pid 445832 00:04:09.532 13:33:06 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 445832 00:04:09.532 13:33:06 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 445832 00:04:10.095 00:04:10.095 real 0m1.263s 00:04:10.095 user 0m2.216s 00:04:10.095 sys 0m0.441s 00:04:10.095 13:33:06 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.095 13:33:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:10.095 ************************************ 00:04:10.095 END TEST spdkcli_tcp 00:04:10.095 ************************************ 00:04:10.095 13:33:06 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:10.095 13:33:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.095 13:33:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.095 13:33:06 -- common/autotest_common.sh@10 -- # set +x 00:04:10.095 ************************************ 00:04:10.095 START TEST dpdk_mem_utility 00:04:10.095 ************************************ 00:04:10.095 13:33:06 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:10.095 * Looking for test storage... 
00:04:10.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:10.095 13:33:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:10.095 13:33:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=446038 00:04:10.095 13:33:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.095 13:33:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 446038 00:04:10.095 13:33:07 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 446038 ']' 00:04:10.095 13:33:07 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.095 13:33:07 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:10.095 13:33:07 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.095 13:33:07 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:10.095 13:33:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:10.095 [2024-07-25 13:33:07.065239] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:10.095 [2024-07-25 13:33:07.065324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid446038 ] 00:04:10.095 EAL: No free 2048 kB hugepages reported on node 1 00:04:10.095 [2024-07-25 13:33:07.123823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.353 [2024-07-25 13:33:07.237838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.611 13:33:07 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:10.611 13:33:07 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:10.611 13:33:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:10.611 13:33:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:10.611 13:33:07 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.611 13:33:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:10.611 { 00:04:10.611 "filename": "/tmp/spdk_mem_dump.txt" 00:04:10.611 } 00:04:10.611 13:33:07 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.611 13:33:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:10.611 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:10.611 1 heaps totaling size 814.000000 MiB 00:04:10.611 size: 814.000000 MiB heap id: 0 00:04:10.611 end heaps---------- 00:04:10.611 8 mempools totaling size 598.116089 MiB 00:04:10.611 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:10.611 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:10.611 size: 84.521057 MiB name: bdev_io_446038 00:04:10.611 size: 51.011292 MiB name: evtpool_446038 00:04:10.611 size: 
50.003479 MiB name: msgpool_446038 00:04:10.611 size: 21.763794 MiB name: PDU_Pool 00:04:10.611 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:10.611 size: 0.026123 MiB name: Session_Pool 00:04:10.611 end mempools------- 00:04:10.611 6 memzones totaling size 4.142822 MiB 00:04:10.611 size: 1.000366 MiB name: RG_ring_0_446038 00:04:10.611 size: 1.000366 MiB name: RG_ring_1_446038 00:04:10.611 size: 1.000366 MiB name: RG_ring_4_446038 00:04:10.611 size: 1.000366 MiB name: RG_ring_5_446038 00:04:10.612 size: 0.125366 MiB name: RG_ring_2_446038 00:04:10.612 size: 0.015991 MiB name: RG_ring_3_446038 00:04:10.612 end memzones------- 00:04:10.612 13:33:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:10.612 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:10.612 list of free elements. size: 12.519348 MiB 00:04:10.612 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:10.612 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:10.612 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:10.612 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:10.612 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:10.612 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:10.612 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:10.612 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:10.612 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:10.612 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:10.612 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:10.612 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:10.612 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:10.612 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:10.612 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:10.612 list of standard malloc elements. 
size: 199.218079 MiB 00:04:10.612 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:10.612 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:10.612 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:10.612 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:10.612 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:10.612 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:10.612 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:10.612 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:10.612 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:10.612 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:10.612 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:10.612 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:10.612 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:10.612 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:10.612 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:10.612 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:10.612 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:10.612 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:10.612 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:10.612 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:10.612 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:10.612 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:10.612 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:10.612 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:10.612 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:10.612 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:10.612 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:10.612 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:10.612 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:10.612 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:10.612 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:10.612 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:10.612 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:10.612 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:10.612 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:10.612 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:10.612 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:10.612 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:10.612 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:10.612 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:10.612 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:10.612 list of memzone associated elements. 
size: 602.262573 MiB 00:04:10.612 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:10.612 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:10.612 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:10.612 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:10.612 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:10.612 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_446038_0 00:04:10.612 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:10.612 associated memzone info: size: 48.002930 MiB name: MP_evtpool_446038_0 00:04:10.612 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:10.612 associated memzone info: size: 48.002930 MiB name: MP_msgpool_446038_0 00:04:10.612 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:10.612 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:10.612 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:10.612 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:10.612 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:10.612 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_446038 00:04:10.612 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:10.612 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_446038 00:04:10.612 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:10.612 associated memzone info: size: 1.007996 MiB name: MP_evtpool_446038 00:04:10.612 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:10.612 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:10.612 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:10.612 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:10.612 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:10.612 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:10.612 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:10.612 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:10.612 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:10.612 associated memzone info: size: 1.000366 MiB name: RG_ring_0_446038 00:04:10.612 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:10.612 associated memzone info: size: 1.000366 MiB name: RG_ring_1_446038 00:04:10.612 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:10.612 associated memzone info: size: 1.000366 MiB name: RG_ring_4_446038 00:04:10.612 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:10.612 associated memzone info: size: 1.000366 MiB name: RG_ring_5_446038 00:04:10.612 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:10.613 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_446038 00:04:10.613 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:10.613 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:10.613 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:10.613 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:10.613 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:10.613 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:10.613 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:10.613 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_446038 00:04:10.613 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:10.613 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:10.613 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:10.613 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:10.613 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:10.613 associated memzone info: size: 0.015991 MiB name: RG_ring_3_446038 00:04:10.613 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:10.613 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:10.613 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:10.613 associated memzone info: size: 0.000183 MiB name: MP_msgpool_446038 00:04:10.613 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:10.613 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_446038 00:04:10.613 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:10.613 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:10.613 13:33:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:10.613 13:33:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 446038 00:04:10.613 13:33:07 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 446038 ']' 00:04:10.613 13:33:07 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 446038 00:04:10.613 13:33:07 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:10.613 13:33:07 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:10.613 13:33:07 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 446038 00:04:10.613 13:33:07 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:10.613 13:33:07 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:10.613 13:33:07 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 446038' 00:04:10.613 killing process with pid 446038 00:04:10.613 13:33:07 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 446038 00:04:10.613 13:33:07 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 446038 00:04:11.177 00:04:11.177 real 0m1.094s 00:04:11.177 user 0m1.053s 00:04:11.177 sys 0m0.411s 00:04:11.177 13:33:08 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:11.177 13:33:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:11.177 ************************************ 00:04:11.177 END TEST dpdk_mem_utility 00:04:11.177 ************************************ 00:04:11.177 13:33:08 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:11.177 13:33:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:11.177 13:33:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:11.177 13:33:08 -- common/autotest_common.sh@10 -- # set +x 00:04:11.177 ************************************ 00:04:11.177 START TEST event 00:04:11.177 ************************************ 00:04:11.177 13:33:08 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:11.177 * Looking for test storage... 
00:04:11.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:11.177 13:33:08 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:11.177 13:33:08 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:11.177 13:33:08 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:11.177 13:33:08 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:11.177 13:33:08 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:11.177 13:33:08 event -- common/autotest_common.sh@10 -- # set +x 00:04:11.177 ************************************ 00:04:11.177 START TEST event_perf 00:04:11.177 ************************************ 00:04:11.177 13:33:08 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:11.177 Running I/O for 1 seconds...[2024-07-25 13:33:08.192311] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:11.177 [2024-07-25 13:33:08.192393] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid446589 ] 00:04:11.435 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.435 [2024-07-25 13:33:08.250259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:11.435 [2024-07-25 13:33:08.361787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:11.435 [2024-07-25 13:33:08.361851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:11.435 [2024-07-25 13:33:08.361920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:11.435 [2024-07-25 13:33:08.361923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.804 Running I/O for 1 seconds... 00:04:12.804 lcore 0: 232793 00:04:12.804 lcore 1: 232791 00:04:12.804 lcore 2: 232792 00:04:12.804 lcore 3: 232792 00:04:12.804 done. 00:04:12.804 00:04:12.804 real 0m1.296s 00:04:12.804 user 0m4.211s 00:04:12.804 sys 0m0.078s 00:04:12.804 13:33:09 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:12.804 13:33:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:12.804 ************************************ 00:04:12.804 END TEST event_perf 00:04:12.804 ************************************ 00:04:12.804 13:33:09 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:12.804 13:33:09 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:12.804 13:33:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:12.804 13:33:09 event -- common/autotest_common.sh@10 -- # set +x 00:04:12.804 ************************************ 00:04:12.804 START TEST event_reactor 00:04:12.804 ************************************ 00:04:12.804 13:33:09 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:12.804 [2024-07-25 13:33:09.531074] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:04:12.804 [2024-07-25 13:33:09.531153] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid446896 ] 00:04:12.804 EAL: No free 2048 kB hugepages reported on node 1 00:04:12.805 [2024-07-25 13:33:09.589593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.805 [2024-07-25 13:33:09.693620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.180 test_start 00:04:14.181 oneshot 00:04:14.181 tick 100 00:04:14.181 tick 100 00:04:14.181 tick 250 00:04:14.181 tick 100 00:04:14.181 tick 100 00:04:14.181 tick 100 00:04:14.181 tick 250 00:04:14.181 tick 500 00:04:14.181 tick 100 00:04:14.181 tick 100 00:04:14.181 tick 250 00:04:14.181 tick 100 00:04:14.181 tick 100 00:04:14.181 test_end 00:04:14.181 00:04:14.181 real 0m1.286s 00:04:14.181 user 0m1.210s 00:04:14.181 sys 0m0.072s 00:04:14.181 13:33:10 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:14.181 13:33:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:14.181 ************************************ 00:04:14.181 END TEST event_reactor 00:04:14.181 ************************************ 00:04:14.181 13:33:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:14.181 13:33:10 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:14.181 13:33:10 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.181 13:33:10 event -- common/autotest_common.sh@10 -- # set +x 00:04:14.181 ************************************ 00:04:14.181 START TEST event_reactor_perf 00:04:14.181 ************************************ 00:04:14.181 13:33:10 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:14.181 [2024-07-25 13:33:10.869016] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:04:14.181 [2024-07-25 13:33:10.869094] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid447060 ] 00:04:14.181 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.181 [2024-07-25 13:33:10.925722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.181 [2024-07-25 13:33:11.029693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.114 test_start 00:04:15.114 test_end 00:04:15.114 Performance: 451136 events per second 00:04:15.114 00:04:15.114 real 0m1.284s 00:04:15.114 user 0m1.203s 00:04:15.114 sys 0m0.076s 00:04:15.114 13:33:12 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.114 13:33:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:15.114 ************************************ 00:04:15.114 END TEST event_reactor_perf 00:04:15.114 ************************************ 00:04:15.372 13:33:12 event -- event/event.sh@49 -- # uname -s 00:04:15.372 13:33:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:15.372 13:33:12 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:15.372 13:33:12 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.372 13:33:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.372 13:33:12 event -- common/autotest_common.sh@10 -- # set +x 00:04:15.372 ************************************ 00:04:15.372 START TEST event_scheduler 00:04:15.372 ************************************ 00:04:15.372 13:33:12 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:15.372 * Looking for test storage... 00:04:15.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:15.372 13:33:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:15.372 13:33:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=447264 00:04:15.372 13:33:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:15.372 13:33:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.372 13:33:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 447264 00:04:15.372 13:33:12 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 447264 ']' 00:04:15.372 13:33:12 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.372 13:33:12 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:15.372 13:33:12 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
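[Note] The "Waiting for process to start up..." message is printed by waitforlisten in autotest_common.sh, which the traces show being handed a pid, an RPC address defaulting to /var/tmp/spdk.sock, and max_retries=100. The sketch below shows only the polling idea; it is an assumption that the real helper also probes the RPC server rather than just checking that the socket file exists.

    # Polling sketch only (assumption: the real waitforlisten additionally
    # verifies the RPC server answers; the 0.5 s interval is illustrative).
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1  # target died while starting
            [ -S "$rpc_addr" ] && return 0          # socket present: likely listening
            sleep 0.5
        done
        return 1
    }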
00:04:15.372 13:33:12 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:15.372 13:33:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:15.372 [2024-07-25 13:33:12.285565] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:15.372 [2024-07-25 13:33:12.285644] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid447264 ] 00:04:15.372 EAL: No free 2048 kB hugepages reported on node 1 00:04:15.372 [2024-07-25 13:33:12.346690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:15.629 [2024-07-25 13:33:12.456838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.629 [2024-07-25 13:33:12.456912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:15.630 [2024-07-25 13:33:12.456915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:15.630 [2024-07-25 13:33:12.456859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.630 13:33:12 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:15.630 13:33:12 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:15.630 13:33:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:15.630 13:33:12 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.630 13:33:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:15.630 [2024-07-25 13:33:12.525796] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:15.630 [2024-07-25 13:33:12.525821] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:15.630 [2024-07-25 13:33:12.525854] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:15.630 [2024-07-25 13:33:12.525866] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:15.630 [2024-07-25 13:33:12.525876] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:15.630 13:33:12 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.630 13:33:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:15.630 13:33:12 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.630 13:33:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:15.630 [2024-07-25 13:33:12.621108] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
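[Note] The NOTICEs above record the dynamic scheduler falling back when the DPDK governor cannot initialize (the 0xF core mask covers some but not all SMT siblings) and then applying its defaults: load limit 20, core limit 80, core busy 95. The same sequence can be driven by hand against a target started with --wait-for-rpc, as scheduler.sh does; all three methods appear in the rpc_get_methods listing earlier in this log (the rpc path is relative to the spdk checkout).

    rpc=scripts/rpc.py
    $rpc framework_set_scheduler dynamic    # may log the governor fallback seen above
    $rpc framework_start_init               # finish subsystem initialization
    $rpc framework_get_scheduler            # confirm the active scheduler and options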
00:04:15.630 13:33:12 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.630 13:33:12 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:15.630 13:33:12 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.630 13:33:12 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.630 13:33:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:15.630 ************************************ 00:04:15.630 START TEST scheduler_create_thread 00:04:15.630 ************************************ 00:04:15.630 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:04:15.630 13:33:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:15.630 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.630 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.630 2 00:04:15.630 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.630 13:33:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:15.630 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.630 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.887 3 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.888 4 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.888 5 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.888 6 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.888 7 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.888 8 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.888 9 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.888 10 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.888 13:33:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.452 13:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.452 00:04:16.452 real 0m0.591s 00:04:16.452 user 0m0.014s 00:04:16.452 sys 0m0.002s 00:04:16.452 13:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.452 13:33:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.452 ************************************ 00:04:16.452 END TEST scheduler_create_thread 00:04:16.452 ************************************ 00:04:16.452 13:33:13 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:16.452 13:33:13 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 447264 00:04:16.452 13:33:13 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 447264 ']' 00:04:16.452 13:33:13 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 447264 00:04:16.452 13:33:13 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:16.452 13:33:13 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:16.452 13:33:13 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 447264 00:04:16.452 13:33:13 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:16.452 13:33:13 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:16.452 13:33:13 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 447264' 00:04:16.452 killing process with pid 447264 00:04:16.452 13:33:13 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 447264 00:04:16.452 13:33:13 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 447264 00:04:16.708 [2024-07-25 13:33:13.721224] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
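[Note] Between the start and stop NOTICEs, scheduler_create_thread exercises the scheduler-plugin RPCs traced above: create pinned active/idle threads (-m cpumask, -a active percentage), then create unpinned threads and mutate or delete them by returned id. Condensed from the trace, with $tid standing in for the ids returned in this run (11 and 12):

    # Pinned threads, one per core in the 0xF mask: 100% active, then 0% idle
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # Unpinned threads, mutated and deleted by the id each create call returns
    tid=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50   # thread 11 above
    tid=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$tid"          # thread 12 above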
00:04:16.966 00:04:16.966 real 0m1.784s 00:04:16.966 user 0m2.324s 00:04:16.966 sys 0m0.341s 00:04:16.966 13:33:13 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.966 13:33:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:16.966 ************************************ 00:04:16.966 END TEST event_scheduler 00:04:16.966 ************************************ 00:04:16.966 13:33:13 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:17.224 13:33:14 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:17.224 13:33:14 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:17.224 13:33:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:17.224 13:33:14 event -- common/autotest_common.sh@10 -- # set +x 00:04:17.224 ************************************ 00:04:17.224 START TEST app_repeat 00:04:17.224 ************************************ 00:04:17.224 13:33:14 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:17.224 13:33:14 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.224 13:33:14 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.224 13:33:14 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:17.224 13:33:14 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:17.224 13:33:14 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:17.224 13:33:14 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:17.224 13:33:14 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:17.224 13:33:14 event.app_repeat -- event/event.sh@19 -- # repeat_pid=447548 00:04:17.224 13:33:14 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:17.224 13:33:14 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.224 13:33:14 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 447548' 00:04:17.224 Process app_repeat pid: 447548 00:04:17.224 13:33:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:17.224 13:33:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:17.224 spdk_app_start Round 0 00:04:17.224 13:33:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 447548 /var/tmp/spdk-nbd.sock 00:04:17.225 13:33:14 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 447548 ']' 00:04:17.225 13:33:14 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:17.225 13:33:14 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:17.225 13:33:14 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:17.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:17.225 13:33:14 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:17.225 13:33:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:17.225 [2024-07-25 13:33:14.042137] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:04:17.225 [2024-07-25 13:33:14.042199] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid447548 ] 00:04:17.225 EAL: No free 2048 kB hugepages reported on node 1 00:04:17.225 [2024-07-25 13:33:14.102931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:17.225 [2024-07-25 13:33:14.216101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:17.225 [2024-07-25 13:33:14.216105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.483 13:33:14 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:17.483 13:33:14 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:17.483 13:33:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:17.740 Malloc0 00:04:17.740 13:33:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:17.998 Malloc1 00:04:17.998 13:33:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:17.998 13:33:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.998 13:33:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:17.998 13:33:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:17.998 13:33:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.998 13:33:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:17.998 13:33:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:17.998 13:33:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.998 13:33:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:17.998 13:33:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:17.998 13:33:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.998 13:33:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:17.998 13:33:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:17.998 13:33:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:17.998 13:33:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:17.998 13:33:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:18.255 /dev/nbd0 00:04:18.255 13:33:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:18.255 13:33:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:18.255 13:33:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:18.255 13:33:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:18.255 13:33:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:18.255 13:33:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:18.255 13:33:15 event.app_repeat 
-- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:18.255 13:33:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:18.255 13:33:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:18.255 13:33:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:18.255 13:33:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:18.255 1+0 records in 00:04:18.255 1+0 records out 00:04:18.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175574 s, 23.3 MB/s 00:04:18.255 13:33:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.255 13:33:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:18.255 13:33:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.255 13:33:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:18.255 13:33:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:18.255 13:33:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:18.255 13:33:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.255 13:33:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:18.512 /dev/nbd1 00:04:18.512 13:33:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:18.513 13:33:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:18.513 13:33:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:18.513 13:33:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:18.513 13:33:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:18.513 13:33:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:18.513 13:33:15 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:18.513 13:33:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:18.513 13:33:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:18.513 13:33:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:18.513 13:33:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:18.513 1+0 records in 00:04:18.513 1+0 records out 00:04:18.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213486 s, 19.2 MB/s 00:04:18.513 13:33:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.513 13:33:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:18.513 13:33:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.513 13:33:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:18.513 13:33:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:18.513 13:33:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:18.513 13:33:15 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.513 13:33:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:18.513 13:33:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.513 13:33:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:18.771 { 00:04:18.771 "nbd_device": "/dev/nbd0", 00:04:18.771 "bdev_name": "Malloc0" 00:04:18.771 }, 00:04:18.771 { 00:04:18.771 "nbd_device": "/dev/nbd1", 00:04:18.771 "bdev_name": "Malloc1" 00:04:18.771 } 00:04:18.771 ]' 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:18.771 { 00:04:18.771 "nbd_device": "/dev/nbd0", 00:04:18.771 "bdev_name": "Malloc0" 00:04:18.771 }, 00:04:18.771 { 00:04:18.771 "nbd_device": "/dev/nbd1", 00:04:18.771 "bdev_name": "Malloc1" 00:04:18.771 } 00:04:18.771 ]' 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:18.771 /dev/nbd1' 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:18.771 /dev/nbd1' 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:18.771 256+0 records in 00:04:18.771 256+0 records out 00:04:18.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00494313 s, 212 MB/s 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:18.771 256+0 records in 00:04:18.771 256+0 records out 00:04:18.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210305 s, 49.9 MB/s 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:18.771 256+0 records in 00:04:18.771 256+0 records out 00:04:18.771 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0228897 s, 45.8 MB/s 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:18.771 13:33:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:19.029 13:33:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:19.029 13:33:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:19.029 13:33:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:19.029 13:33:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:19.029 13:33:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:19.029 13:33:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:19.029 13:33:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:19.029 13:33:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:19.029 13:33:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:19.029 13:33:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:19.287 13:33:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:19.287 13:33:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:19.287 13:33:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:19.287 13:33:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:19.287 13:33:16 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:19.287 13:33:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:19.287 13:33:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:19.287 13:33:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:19.287 13:33:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:19.287 13:33:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.287 13:33:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:19.544 13:33:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:19.544 13:33:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:19.544 13:33:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:19.801 13:33:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:19.801 13:33:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:19.801 13:33:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:19.801 13:33:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:19.801 13:33:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:19.801 13:33:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:19.801 13:33:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:19.801 13:33:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:19.801 13:33:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:19.801 13:33:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:20.059 13:33:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:20.317 [2024-07-25 13:33:17.152247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:20.317 [2024-07-25 13:33:17.252440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.317 [2024-07-25 13:33:17.252440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.317 [2024-07-25 13:33:17.309770] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:20.317 [2024-07-25 13:33:17.309856] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:23.595 13:33:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:23.595 13:33:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:23.595 spdk_app_start Round 1 00:04:23.595 13:33:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 447548 /var/tmp/spdk-nbd.sock 00:04:23.595 13:33:19 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 447548 ']' 00:04:23.595 13:33:19 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:23.595 13:33:19 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:23.595 13:33:19 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:23.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
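Between rounds the harness confirms every NBD export is gone by listing disks over RPC and counting /dev/nbd entries, which is the jq pipeline traced above. A sketch, assuming the same rpc.py and jq calls; the function name and error handling are illustrative:

    # Sketch of the nbd_get_count check: list exported NBD devices over RPC
    # and count them (function name assumed).
    count_nbd_disks() {
        local sock=$1
        scripts/rpc.py -s "$sock" nbd_get_disks \
            | jq -r '.[] | .nbd_device' \
            | grep -c /dev/nbd || true               # grep -c prints 0 but exits 1 on no match
    }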
00:04:23.595 13:33:19 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:23.595 13:33:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:23.595 13:33:20 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:23.595 13:33:20 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:23.595 13:33:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.595 Malloc0 00:04:23.595 13:33:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.853 Malloc1 00:04:23.853 13:33:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.853 13:33:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.853 13:33:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.853 13:33:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:23.853 13:33:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.853 13:33:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:23.853 13:33:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.853 13:33:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.853 13:33:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.853 13:33:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:23.853 13:33:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.853 13:33:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:23.853 13:33:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:23.853 13:33:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:23.853 13:33:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.853 13:33:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:23.853 /dev/nbd0 00:04:24.111 13:33:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:24.111 13:33:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:24.111 13:33:20 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:24.111 13:33:20 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:24.111 13:33:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:24.111 13:33:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:24.111 13:33:20 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:24.111 13:33:20 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:24.111 13:33:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:24.111 13:33:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:24.111 13:33:20 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:24.111 1+0 records in 00:04:24.111 1+0 records out 00:04:24.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180026 s, 22.8 MB/s 00:04:24.111 13:33:20 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.111 13:33:20 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:24.111 13:33:20 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.111 13:33:20 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:24.111 13:33:20 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:24.111 13:33:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:24.111 13:33:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.111 13:33:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:24.368 /dev/nbd1 00:04:24.368 13:33:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:24.368 13:33:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:24.368 13:33:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:24.368 13:33:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:24.368 13:33:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:24.368 13:33:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:24.368 13:33:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:24.368 13:33:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:24.368 13:33:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:24.368 13:33:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:24.368 13:33:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:24.368 1+0 records in 00:04:24.368 1+0 records out 00:04:24.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018811 s, 21.8 MB/s 00:04:24.369 13:33:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.369 13:33:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:24.369 13:33:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:24.369 13:33:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:24.369 13:33:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:24.369 13:33:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:24.369 13:33:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:24.369 13:33:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:24.369 13:33:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.369 13:33:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:24.625 13:33:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:24.625 { 00:04:24.625 "nbd_device": "/dev/nbd0", 00:04:24.625 "bdev_name": "Malloc0" 00:04:24.625 }, 00:04:24.625 { 00:04:24.625 "nbd_device": "/dev/nbd1", 00:04:24.625 "bdev_name": "Malloc1" 00:04:24.625 } 00:04:24.625 ]' 00:04:24.625 13:33:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:24.625 { 00:04:24.625 "nbd_device": "/dev/nbd0", 00:04:24.625 "bdev_name": "Malloc0" 00:04:24.625 }, 00:04:24.625 { 00:04:24.626 "nbd_device": "/dev/nbd1", 00:04:24.626 "bdev_name": "Malloc1" 00:04:24.626 } 00:04:24.626 ]' 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:24.626 /dev/nbd1' 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:24.626 /dev/nbd1' 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:24.626 256+0 records in 00:04:24.626 256+0 records out 00:04:24.626 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00524455 s, 200 MB/s 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:24.626 256+0 records in 00:04:24.626 256+0 records out 00:04:24.626 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208541 s, 50.3 MB/s 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:24.626 256+0 records in 00:04:24.626 256+0 records out 00:04:24.626 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226875 s, 46.2 MB/s 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.626 13:33:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:24.886 13:33:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:24.886 13:33:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:24.886 13:33:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:24.886 13:33:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.886 13:33:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.886 13:33:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:24.886 13:33:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:24.886 13:33:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.886 13:33:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.886 13:33:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:25.143 13:33:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:25.143 13:33:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:25.143 13:33:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:25.143 13:33:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:25.143 13:33:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:25.143 13:33:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:25.143 13:33:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:25.143 13:33:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:25.143 13:33:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.143 13:33:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.143 13:33:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.400 13:33:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:25.400 13:33:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:25.400 13:33:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:25.400 13:33:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:25.400 13:33:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:25.400 13:33:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:25.400 13:33:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:25.400 13:33:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:25.400 13:33:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:25.400 13:33:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:25.400 13:33:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:25.400 13:33:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:25.400 13:33:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:25.658 13:33:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:25.916 [2024-07-25 13:33:22.906863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.174 [2024-07-25 13:33:23.007546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.174 [2024-07-25 13:33:23.007548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.174 [2024-07-25 13:33:23.065559] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:26.174 [2024-07-25 13:33:23.065635] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:28.746 13:33:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:28.746 13:33:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:28.746 spdk_app_start Round 2 00:04:28.746 13:33:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 447548 /var/tmp/spdk-nbd.sock 00:04:28.746 13:33:25 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 447548 ']' 00:04:28.746 13:33:25 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:28.746 13:33:25 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:28.746 13:33:25 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:28.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:28.746 13:33:25 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:28.746 13:33:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:29.003 13:33:25 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:29.003 13:33:25 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:29.003 13:33:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.261 Malloc0 00:04:29.261 13:33:26 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:29.518 Malloc1 00:04:29.518 13:33:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.518 13:33:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.518 13:33:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.518 13:33:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:29.518 13:33:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.518 13:33:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:29.518 13:33:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:29.518 13:33:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.518 13:33:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:29.518 13:33:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:29.518 13:33:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.518 13:33:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:29.518 13:33:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:29.518 13:33:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:29.518 13:33:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.518 13:33:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:29.775 /dev/nbd0 00:04:29.775 13:33:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:29.775 13:33:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:29.775 13:33:26 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:29.775 13:33:26 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:29.775 13:33:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:29.775 13:33:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:29.775 13:33:26 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:29.775 13:33:26 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:29.775 13:33:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:29.775 13:33:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:29.776 13:33:26 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:29.776 1+0 records in 00:04:29.776 1+0 records out 00:04:29.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000142594 s, 28.7 MB/s 00:04:29.776 13:33:26 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.776 13:33:26 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:29.776 13:33:26 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:29.776 13:33:26 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:29.776 13:33:26 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:29.776 13:33:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:29.776 13:33:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.776 13:33:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:30.033 /dev/nbd1 00:04:30.033 13:33:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:30.033 13:33:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:30.033 13:33:26 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:30.033 13:33:26 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:30.033 13:33:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:30.033 13:33:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:30.033 13:33:26 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:30.033 13:33:26 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:30.033 13:33:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:30.033 13:33:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:30.033 13:33:26 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:30.033 1+0 records in 00:04:30.033 1+0 records out 00:04:30.033 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195315 s, 21.0 MB/s 00:04:30.033 13:33:26 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:30.033 13:33:26 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:30.033 13:33:26 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:30.033 13:33:26 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:30.033 13:33:26 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:30.033 13:33:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:30.033 13:33:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.033 13:33:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:30.033 13:33:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.033 13:33:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:30.290 13:33:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:30.290 { 00:04:30.290 "nbd_device": "/dev/nbd0", 00:04:30.290 "bdev_name": "Malloc0" 00:04:30.290 }, 00:04:30.290 { 00:04:30.290 "nbd_device": "/dev/nbd1", 00:04:30.290 "bdev_name": "Malloc1" 00:04:30.290 } 00:04:30.290 ]' 00:04:30.290 13:33:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:30.290 { 00:04:30.290 "nbd_device": "/dev/nbd0", 00:04:30.290 "bdev_name": "Malloc0" 00:04:30.290 }, 00:04:30.290 { 00:04:30.290 "nbd_device": "/dev/nbd1", 00:04:30.290 "bdev_name": "Malloc1" 00:04:30.290 } 00:04:30.290 ]' 00:04:30.290 13:33:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:30.290 13:33:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:30.290 /dev/nbd1' 00:04:30.290 13:33:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:30.290 /dev/nbd1' 00:04:30.290 13:33:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:30.290 13:33:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:30.290 13:33:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:30.290 13:33:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:30.290 13:33:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:30.290 13:33:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:30.290 13:33:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.290 13:33:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.290 13:33:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:30.290 13:33:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.290 13:33:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:30.290 13:33:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:30.290 256+0 records in 00:04:30.291 256+0 records out 00:04:30.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501888 s, 209 MB/s 00:04:30.291 13:33:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.291 13:33:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:30.291 256+0 records in 00:04:30.291 256+0 records out 00:04:30.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206807 s, 50.7 MB/s 00:04:30.291 13:33:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:30.291 13:33:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:30.291 256+0 records in 00:04:30.291 256+0 records out 00:04:30.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231302 s, 45.3 MB/s 00:04:30.291 13:33:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:30.291 13:33:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.291 13:33:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:30.291 13:33:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:30.291 13:33:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.291 13:33:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:30.291 13:33:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:30.291 13:33:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.291 13:33:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:30.291 13:33:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:30.291 13:33:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:30.548 13:33:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:30.548 13:33:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:30.548 13:33:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.548 13:33:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.548 13:33:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:30.548 13:33:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:30.548 13:33:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.548 13:33:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:30.805 13:33:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:30.805 13:33:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:30.805 13:33:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:30.805 13:33:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:30.805 13:33:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:30.805 13:33:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:30.805 13:33:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:30.805 13:33:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:30.805 13:33:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:30.805 13:33:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:31.062 13:33:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:31.062 13:33:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:31.062 13:33:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:31.062 13:33:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.062 13:33:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.062 13:33:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:31.062 13:33:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:31.062 13:33:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.062 13:33:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:31.062 13:33:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.062 13:33:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.320 13:33:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:31.320 13:33:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:31.320 13:33:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.320 13:33:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:31.320 13:33:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:31.320 13:33:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.320 13:33:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:31.320 13:33:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:31.320 13:33:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:31.320 13:33:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:31.320 13:33:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:31.320 13:33:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:31.320 13:33:28 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:31.578 13:33:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:31.836 [2024-07-25 13:33:28.703679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:31.836 [2024-07-25 13:33:28.804880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.836 [2024-07-25 13:33:28.804881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.836 [2024-07-25 13:33:28.862892] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:31.836 [2024-07-25 13:33:28.862968] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:35.124 13:33:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 447548 /var/tmp/spdk-nbd.sock 00:04:35.124 13:33:31 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 447548 ']' 00:04:35.124 13:33:31 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:35.124 13:33:31 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:35.124 13:33:31 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:35.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:35.125 13:33:31 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:35.125 13:33:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:35.125 13:33:31 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:35.125 13:33:31 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:35.125 13:33:31 event.app_repeat -- event/event.sh@39 -- # killprocess 447548 00:04:35.125 13:33:31 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 447548 ']' 00:04:35.125 13:33:31 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 447548 00:04:35.125 13:33:31 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:04:35.125 13:33:31 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:35.125 13:33:31 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 447548 00:04:35.125 13:33:31 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:35.125 13:33:31 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:35.125 13:33:31 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 447548' 00:04:35.125 killing process with pid 447548 00:04:35.125 13:33:31 event.app_repeat -- common/autotest_common.sh@969 -- # kill 447548 00:04:35.125 13:33:31 event.app_repeat -- common/autotest_common.sh@974 -- # wait 447548 00:04:35.125 spdk_app_start is called in Round 0. 00:04:35.125 Shutdown signal received, stop current app iteration 00:04:35.125 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:04:35.125 spdk_app_start is called in Round 1. 00:04:35.125 Shutdown signal received, stop current app iteration 00:04:35.125 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:04:35.125 spdk_app_start is called in Round 2. 00:04:35.125 Shutdown signal received, stop current app iteration 00:04:35.125 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:04:35.125 spdk_app_start is called in Round 3. 
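Each of the three rounds above performs the same data check: fill a 1 MiB scratch file from /dev/urandom, write it through both NBD devices with O_DIRECT, then cmp it back. A condensed sketch of one round; the scratch path is illustrative and the exports via nbd_start_disk are assumed already in place:

    # One data-verify round as traced above (scratch file path assumed).
    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct # write through the export
        cmp -b -n 1M "$tmp" "$nbd"                            # read back and compare
    done
    rm "$tmp"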
00:04:35.125 Shutdown signal received, stop current app iteration 00:04:35.125 13:33:31 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:35.125 13:33:31 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:35.125 00:04:35.125 real 0m17.941s 00:04:35.125 user 0m39.024s 00:04:35.125 sys 0m3.138s 00:04:35.125 13:33:31 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.125 13:33:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:35.125 ************************************ 00:04:35.125 END TEST app_repeat 00:04:35.125 ************************************ 00:04:35.125 13:33:31 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:35.125 13:33:31 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:35.125 13:33:31 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.125 13:33:31 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.125 13:33:31 event -- common/autotest_common.sh@10 -- # set +x 00:04:35.125 ************************************ 00:04:35.125 START TEST cpu_locks 00:04:35.125 ************************************ 00:04:35.125 13:33:32 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:35.125 * Looking for test storage... 00:04:35.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:35.125 13:33:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:35.125 13:33:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:35.125 13:33:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:35.125 13:33:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:35.125 13:33:32 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.125 13:33:32 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.125 13:33:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.125 ************************************ 00:04:35.125 START TEST default_locks 00:04:35.125 ************************************ 00:04:35.125 13:33:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:04:35.125 13:33:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=449903 00:04:35.125 13:33:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.125 13:33:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 449903 00:04:35.125 13:33:32 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 449903 ']' 00:04:35.125 13:33:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.125 13:33:32 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:35.125 13:33:32 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:35.125 13:33:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:35.125 13:33:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.125 [2024-07-25 13:33:32.148785] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:35.125 [2024-07-25 13:33:32.148857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid449903 ] 00:04:35.383 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.383 [2024-07-25 13:33:32.205856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.383 [2024-07-25 13:33:32.310208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.641 13:33:32 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:35.641 13:33:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:04:35.641 13:33:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 449903 00:04:35.641 13:33:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 449903 00:04:35.641 13:33:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:35.898 lslocks: write error 00:04:35.898 13:33:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 449903 00:04:35.898 13:33:32 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 449903 ']' 00:04:35.898 13:33:32 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 449903 00:04:35.898 13:33:32 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:04:35.898 13:33:32 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:36.156 13:33:32 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 449903 00:04:36.156 13:33:32 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:36.156 13:33:32 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:36.156 13:33:32 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 449903' 00:04:36.156 killing process with pid 449903 00:04:36.156 13:33:32 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 449903 00:04:36.156 13:33:32 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 449903 00:04:36.413 13:33:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 449903 00:04:36.413 13:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:36.413 13:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 449903 00:04:36.414 13:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:36.414 13:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:36.414 13:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:36.414 13:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:36.414 13:33:33 event.cpu_locks.default_locks -- 
common/autotest_common.sh@653 -- # waitforlisten 449903 00:04:36.414 13:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 449903 ']' 00:04:36.414 13:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.414 13:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:36.414 13:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.414 13:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:36.414 13:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (449903) - No such process 00:04:36.414 ERROR: process (pid: 449903) is no longer running 00:04:36.414 13:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:36.414 13:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:04:36.414 13:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:36.414 13:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:36.414 13:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:36.414 13:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:36.414 13:33:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:36.414 13:33:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:36.414 13:33:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:36.414 13:33:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:36.414 00:04:36.414 real 0m1.292s 00:04:36.414 user 0m1.240s 00:04:36.414 sys 0m0.529s 00:04:36.414 13:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.414 13:33:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.414 ************************************ 00:04:36.414 END TEST default_locks 00:04:36.414 ************************************ 00:04:36.414 13:33:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:36.414 13:33:33 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.414 13:33:33 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.414 13:33:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.414 ************************************ 00:04:36.414 START TEST default_locks_via_rpc 00:04:36.414 ************************************ 00:04:36.414 13:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:04:36.414 13:33:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=450066 00:04:36.414 13:33:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.414 13:33:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 450066 
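The "lslocks: write error" lines scattered through these runs are benign: locks_exist pipes lslocks into grep -q, grep exits on its first match, and lslocks takes EPIPE on the rest of its output. The check itself, traced at event/cpu_locks.sh@22, amounts to the following sketch (reconstructed from the trace, not the verbatim helper):

  # True while the target holds a POSIX lock on one of its per-core
  # /var/tmp/spdk_cpu_lock_* files; false once the process is gone.
  locks_exist() {
          lslocks -p "$1" | grep -q spdk_cpu_lock
  }

default_locks asserts this holds while the target runs, then kills it and verifies that no spdk_cpu_lock files are left behind.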
00:04:36.414 13:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 450066 ']' 00:04:36.414 13:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.414 13:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:36.414 13:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.414 13:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:36.414 13:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.672 [2024-07-25 13:33:33.492898] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:36.672 [2024-07-25 13:33:33.492998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid450066 ] 00:04:36.672 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.672 [2024-07-25 13:33:33.551927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.672 [2024-07-25 13:33:33.662271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.930 13:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:36.930 13:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:36.930 13:33:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:36.930 13:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.930 13:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.930 13:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.930 13:33:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:36.930 13:33:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:36.930 13:33:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:36.930 13:33:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:36.930 13:33:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:36.930 13:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.930 13:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.930 13:33:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.930 13:33:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 450066 00:04:36.930 13:33:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 450066 00:04:36.930 13:33:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:37.187 13:33:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 450066 
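default_locks_via_rpc exercises the same claim/release cycle over JSON-RPC instead of process lifetime: framework_disable_cpumask_locks drops the per-core lock files while the target keeps running, and framework_enable_cpumask_locks re-claims them. With the stock rpc.py client this is roughly (the RPC names are taken from the trace; the client invocation is assumed):

  scripts/rpc.py framework_disable_cpumask_locks   # target releases /var/tmp/spdk_cpu_lock_*
  scripts/rpc.py framework_enable_cpumask_locks    # target re-acquires its core locks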
00:04:37.187 13:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 450066 ']' 00:04:37.187 13:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 450066 00:04:37.187 13:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:04:37.187 13:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:37.187 13:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 450066 00:04:37.445 13:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:37.445 13:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:37.445 13:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 450066' 00:04:37.445 killing process with pid 450066 00:04:37.445 13:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 450066 00:04:37.445 13:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 450066 00:04:37.703 00:04:37.703 real 0m1.226s 00:04:37.703 user 0m1.175s 00:04:37.703 sys 0m0.499s 00:04:37.703 13:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.703 13:33:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.703 ************************************ 00:04:37.703 END TEST default_locks_via_rpc 00:04:37.703 ************************************ 00:04:37.703 13:33:34 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:37.703 13:33:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.703 13:33:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.703 13:33:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:37.703 ************************************ 00:04:37.703 START TEST non_locking_app_on_locked_coremask 00:04:37.703 ************************************ 00:04:37.703 13:33:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:04:37.703 13:33:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=450309 00:04:37.703 13:33:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.703 13:33:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 450309 /var/tmp/spdk.sock 00:04:37.703 13:33:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 450309 ']' 00:04:37.703 13:33:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.703 13:33:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:37.703 13:33:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:37.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.703 13:33:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:37.703 13:33:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:37.961 [2024-07-25 13:33:34.766489] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:37.961 [2024-07-25 13:33:34.766595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid450309 ] 00:04:37.961 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.961 [2024-07-25 13:33:34.824666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.961 [2024-07-25 13:33:34.935009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.218 13:33:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:38.218 13:33:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:38.218 13:33:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=450357 00:04:38.218 13:33:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:38.218 13:33:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 450357 /var/tmp/spdk2.sock 00:04:38.218 13:33:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 450357 ']' 00:04:38.218 13:33:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:38.218 13:33:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:38.218 13:33:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:38.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:38.219 13:33:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:38.219 13:33:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:38.219 [2024-07-25 13:33:35.222254] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:38.219 [2024-07-25 13:33:35.222334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid450357 ] 00:04:38.219 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.476 [2024-07-25 13:33:35.305486] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
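non_locking_app_on_locked_coremask shows the supported way to share a core: the first target claims core 0 normally, and the second opts out of locking with --disable-cpumask-locks and takes its own RPC socket via -r, which is why the second startup above logs "CPU core locks deactivated." In outline (flags exactly as in the trace; backgrounding is illustrative):

  build/bin/spdk_tgt -m 0x1 &                          # claims /var/tmp/spdk_cpu_lock_000
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
          -r /var/tmp/spdk2.sock &                     # starts on the same core, lock-free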
00:04:38.476 [2024-07-25 13:33:35.305516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.733 [2024-07-25 13:33:35.514768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.298 13:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:39.298 13:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:39.298 13:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 450309 00:04:39.298 13:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:39.298 13:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 450309 00:04:39.862 lslocks: write error 00:04:39.862 13:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 450309 00:04:39.862 13:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 450309 ']' 00:04:39.862 13:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 450309 00:04:39.862 13:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:39.862 13:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:39.862 13:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 450309 00:04:39.862 13:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:39.862 13:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:39.862 13:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 450309' 00:04:39.862 killing process with pid 450309 00:04:39.862 13:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 450309 00:04:39.862 13:33:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 450309 00:04:40.794 13:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 450357 00:04:40.794 13:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 450357 ']' 00:04:40.794 13:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 450357 00:04:40.794 13:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:40.794 13:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:40.794 13:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 450357 00:04:40.794 13:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:40.794 13:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:40.794 13:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 450357' 00:04:40.794 killing 
process with pid 450357 00:04:40.794 13:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 450357 00:04:40.794 13:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 450357 00:04:41.052 00:04:41.052 real 0m3.277s 00:04:41.052 user 0m3.455s 00:04:41.052 sys 0m0.985s 00:04:41.052 13:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.052 13:33:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.052 ************************************ 00:04:41.052 END TEST non_locking_app_on_locked_coremask 00:04:41.052 ************************************ 00:04:41.052 13:33:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:41.052 13:33:38 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.052 13:33:38 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.052 13:33:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.052 ************************************ 00:04:41.052 START TEST locking_app_on_unlocked_coremask 00:04:41.052 ************************************ 00:04:41.052 13:33:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:04:41.052 13:33:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=450679 00:04:41.052 13:33:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:41.052 13:33:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 450679 /var/tmp/spdk.sock 00:04:41.052 13:33:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 450679 ']' 00:04:41.052 13:33:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.052 13:33:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:41.052 13:33:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.052 13:33:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:41.052 13:33:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.309 [2024-07-25 13:33:38.098485] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:41.309 [2024-07-25 13:33:38.098577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid450679 ] 00:04:41.309 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.309 [2024-07-25 13:33:38.156087] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:41.309 [2024-07-25 13:33:38.156127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.309 [2024-07-25 13:33:38.265020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.567 13:33:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:41.567 13:33:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:41.567 13:33:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=450791 00:04:41.567 13:33:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:41.567 13:33:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 450791 /var/tmp/spdk2.sock 00:04:41.567 13:33:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 450791 ']' 00:04:41.567 13:33:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:41.567 13:33:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:41.567 13:33:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:41.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:41.567 13:33:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:41.567 13:33:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.567 [2024-07-25 13:33:38.553293] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
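locking_app_on_unlocked_coremask inverts that arrangement: the first target starts with --disable-cpumask-locks, so the core 0 lock file is never claimed, and a second target with locking left enabled starts cleanly on the same mask and takes the lock itself. Sketch (illustrative):

  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # holds no lock files
  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # succeeds and claims core 0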
00:04:41.567 [2024-07-25 13:33:38.553387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid450791 ] 00:04:41.567 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.825 [2024-07-25 13:33:38.635839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.825 [2024-07-25 13:33:38.844739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.757 13:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:42.757 13:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:42.757 13:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 450791 00:04:42.757 13:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 450791 00:04:42.757 13:33:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:43.014 lslocks: write error 00:04:43.014 13:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 450679 00:04:43.014 13:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 450679 ']' 00:04:43.014 13:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 450679 00:04:43.014 13:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:43.014 13:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:43.014 13:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 450679 00:04:43.014 13:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:43.014 13:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:43.014 13:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 450679' 00:04:43.014 killing process with pid 450679 00:04:43.014 13:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 450679 00:04:43.014 13:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 450679 00:04:43.946 13:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 450791 00:04:43.946 13:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 450791 ']' 00:04:43.946 13:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 450791 00:04:43.946 13:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:43.946 13:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:43.946 13:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 450791 00:04:43.946 13:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:04:43.946 13:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:43.946 13:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 450791' 00:04:43.946 killing process with pid 450791 00:04:43.946 13:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 450791 00:04:43.946 13:33:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 450791 00:04:44.511 00:04:44.511 real 0m3.276s 00:04:44.511 user 0m3.455s 00:04:44.511 sys 0m1.020s 00:04:44.511 13:33:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.511 13:33:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:44.511 ************************************ 00:04:44.511 END TEST locking_app_on_unlocked_coremask 00:04:44.511 ************************************ 00:04:44.511 13:33:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:44.511 13:33:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.511 13:33:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.511 13:33:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.511 ************************************ 00:04:44.511 START TEST locking_app_on_locked_coremask 00:04:44.511 ************************************ 00:04:44.511 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:04:44.511 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=451102 00:04:44.511 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.511 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 451102 /var/tmp/spdk.sock 00:04:44.511 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 451102 ']' 00:04:44.511 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.511 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:44.511 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.511 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:44.511 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:44.511 [2024-07-25 13:33:41.429811] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:04:44.511 [2024-07-25 13:33:41.429889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid451102 ] 00:04:44.511 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.511 [2024-07-25 13:33:41.489402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.769 [2024-07-25 13:33:41.598946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.027 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:45.027 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:45.027 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=451225 00:04:45.027 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:45.027 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 451225 /var/tmp/spdk2.sock 00:04:45.027 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:45.027 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 451225 /var/tmp/spdk2.sock 00:04:45.027 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:45.027 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:45.027 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:45.027 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:45.027 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 451225 /var/tmp/spdk2.sock 00:04:45.027 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 451225 ']' 00:04:45.027 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:45.027 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:45.027 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:45.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:45.027 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:45.027 13:33:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:45.027 [2024-07-25 13:33:41.893168] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
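locking_app_on_locked_coremask covers the failure path: the second target keeps locking enabled on a core the first already holds, so claim_cpu_cores aborts its startup with "Cannot create lock on core 0, probably process 451102 has claimed it", as logged just below. The test asserts that failure with the NOT helper from autotest_common.sh, roughly:

  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
  pid2=$!
  # NOT inverts the exit status: the assertion passes precisely because the
  # second target dies before it ever listens on /var/tmp/spdk2.sock.
  NOT waitforlisten "$pid2" /var/tmp/spdk2.sock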
00:04:45.027 [2024-07-25 13:33:41.893248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid451225 ] 00:04:45.027 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.027 [2024-07-25 13:33:41.976927] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 451102 has claimed it. 00:04:45.027 [2024-07-25 13:33:41.976994] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:45.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (451225) - No such process 00:04:45.592 ERROR: process (pid: 451225) is no longer running 00:04:45.592 13:33:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:45.592 13:33:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:04:45.592 13:33:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:45.592 13:33:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:45.592 13:33:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:45.592 13:33:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:45.592 13:33:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 451102 00:04:45.592 13:33:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 451102 00:04:45.592 13:33:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:46.156 lslocks: write error 00:04:46.156 13:33:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 451102 00:04:46.156 13:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 451102 ']' 00:04:46.156 13:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 451102 00:04:46.156 13:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:46.156 13:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:46.156 13:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 451102 00:04:46.156 13:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:46.157 13:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:46.157 13:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 451102' 00:04:46.157 killing process with pid 451102 00:04:46.157 13:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 451102 00:04:46.157 13:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 451102 00:04:46.720 00:04:46.720 real 0m2.096s 00:04:46.720 user 0m2.278s 00:04:46.720 sys 0m0.641s 00:04:46.720 13:33:43 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.720 13:33:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.720 ************************************ 00:04:46.720 END TEST locking_app_on_locked_coremask 00:04:46.720 ************************************ 00:04:46.720 13:33:43 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:46.720 13:33:43 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.720 13:33:43 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.720 13:33:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.720 ************************************ 00:04:46.720 START TEST locking_overlapped_coremask 00:04:46.720 ************************************ 00:04:46.720 13:33:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:04:46.720 13:33:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=451400 00:04:46.720 13:33:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:46.720 13:33:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 451400 /var/tmp/spdk.sock 00:04:46.720 13:33:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 451400 ']' 00:04:46.720 13:33:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.721 13:33:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:46.721 13:33:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.721 13:33:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:46.721 13:33:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.721 [2024-07-25 13:33:43.576670] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:04:46.721 [2024-07-25 13:33:43.576734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid451400 ] 00:04:46.721 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.721 [2024-07-25 13:33:43.635109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:46.721 [2024-07-25 13:33:43.751084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.721 [2024-07-25 13:33:43.751107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:46.721 [2024-07-25 13:33:43.751110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.978 13:33:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:46.978 13:33:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:46.978 13:33:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=451525 00:04:46.978 13:33:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:46.978 13:33:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 451525 /var/tmp/spdk2.sock 00:04:46.978 13:33:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:46.978 13:33:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 451525 /var/tmp/spdk2.sock 00:04:46.978 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:46.978 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.978 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:46.978 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.978 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 451525 /var/tmp/spdk2.sock 00:04:46.978 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 451525 ']' 00:04:46.978 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:46.978 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:46.978 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:46.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:46.978 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:46.978 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.235 [2024-07-25 13:33:44.050986] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
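The collision in locking_overlapped_coremask is plain mask arithmetic: -m 0x7 covers cores 0-2 and -m 0x1c covers cores 2-4, so the two masks overlap exactly on core 2, the core named in the claim error below. Quick check:

  printf '0x%x\n' $(( 0x07 & 0x1c ))   # 0x4 -> only bit 2 set -> core 2 is contested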
00:04:47.235 [2024-07-25 13:33:44.051099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid451525 ] 00:04:47.235 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.235 [2024-07-25 13:33:44.138484] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 451400 has claimed it. 00:04:47.235 [2024-07-25 13:33:44.138552] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:47.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (451525) - No such process 00:04:47.798 ERROR: process (pid: 451525) is no longer running 00:04:47.798 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:47.798 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:04:47.798 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:47.798 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:47.798 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:47.798 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:47.798 13:33:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:47.798 13:33:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:47.799 13:33:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:47.799 13:33:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:47.799 13:33:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 451400 00:04:47.799 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 451400 ']' 00:04:47.799 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 451400 00:04:47.799 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:04:47.799 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:47.799 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 451400 00:04:47.799 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:47.799 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:47.799 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 451400' 00:04:47.799 killing process with pid 451400 00:04:47.799 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 
-- # kill 451400 00:04:47.799 13:33:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 451400 00:04:48.362 00:04:48.362 real 0m1.672s 00:04:48.362 user 0m4.418s 00:04:48.362 sys 0m0.462s 00:04:48.362 13:33:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.362 13:33:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:48.362 ************************************ 00:04:48.362 END TEST locking_overlapped_coremask 00:04:48.362 ************************************ 00:04:48.362 13:33:45 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:48.362 13:33:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:48.362 13:33:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.362 13:33:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.362 ************************************ 00:04:48.362 START TEST locking_overlapped_coremask_via_rpc 00:04:48.362 ************************************ 00:04:48.362 13:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:04:48.362 13:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=451693 00:04:48.362 13:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:48.362 13:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 451693 /var/tmp/spdk.sock 00:04:48.362 13:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 451693 ']' 00:04:48.362 13:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.362 13:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:48.362 13:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.362 13:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:48.362 13:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.362 [2024-07-25 13:33:45.299368] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:48.362 [2024-07-25 13:33:45.299449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid451693 ] 00:04:48.362 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.362 [2024-07-25 13:33:45.356081] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:48.362 [2024-07-25 13:33:45.356114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:48.619 [2024-07-25 13:33:45.464502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.620 [2024-07-25 13:33:45.464557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.620 [2024-07-25 13:33:45.464560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.877 13:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:48.877 13:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:48.877 13:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=451703 00:04:48.877 13:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:48.877 13:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 451703 /var/tmp/spdk2.sock 00:04:48.877 13:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 451703 ']' 00:04:48.877 13:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:48.877 13:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:48.877 13:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:48.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:48.877 13:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:48.877 13:33:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.877 [2024-07-25 13:33:45.777112] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:48.877 [2024-07-25 13:33:45.777191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid451703 ] 00:04:48.877 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.877 [2024-07-25 13:33:45.863611] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:48.877 [2024-07-25 13:33:45.863653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:49.134 [2024-07-25 13:33:46.082596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:49.134 [2024-07-25 13:33:46.086095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:04:49.134 [2024-07-25 13:33:46.086098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:49.698 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:49.698 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:49.698 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:49.698 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.698 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.698 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.698 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:49.698 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:49.698 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:49.698 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:49.698 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.698 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:49.698 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.698 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:49.698 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.698 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.698 [2024-07-25 13:33:46.731168] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 451693 has claimed it. 
00:04:49.955 request:
00:04:49.955 {
00:04:49.955 "method": "framework_enable_cpumask_locks",
00:04:49.955 "req_id": 1
00:04:49.955 }
00:04:49.955 Got JSON-RPC error response
00:04:49.955 response:
00:04:49.955 {
00:04:49.955 "code": -32603,
00:04:49.955 "message": "Failed to claim CPU core: 2"
00:04:49.955 }
00:04:49.955 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:04:49.955 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1
00:04:49.955 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:04:49.955 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:04:49.955 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:04:49.955 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 451693 /var/tmp/spdk.sock
00:04:49.955 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 451693 ']'
00:04:49.955 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:49.955 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:49.955 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:49.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:49.955 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:49.955 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:49.955 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:49.955 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:04:49.955 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 451703 /var/tmp/spdk2.sock
00:04:49.955 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 451703 ']'
00:04:49.955 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:49.955 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:49.955 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:49.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
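This is the expected failure path of the test: the first target (pid 451693) holds the per-core locks for cores 0-2 (its reactors started on cores 0, 1 and 2 above), while the second target (pid 451703) was launched with the overlapping mask 0x1c and --disable-cpumask-locks, so turning the locks back on over RPC has to collide on core 2. A minimal sketch of the same sequence, with paths, flags and the RPC name taken from the log (the first target's -m 0x7 is inferred from its reactors, not recorded directly):

```bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/bin/spdk_tgt -m 0x7 -r /var/tmp/spdk.sock &                            # claims and locks cores 0-2
$SPDK/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2-4, no locks taken
sleep 1   # stand-in for the harness's waitforlisten loop
# Re-enabling the locks on the second target must fail with -32603: core 2 is already claimed.
$SPDK/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
    || echo "expected: Failed to claim CPU core: 2"
```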
00:04:49.955 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:49.955 13:33:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.212 13:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:50.212 13:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:50.212 13:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:50.212 13:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:50.212 13:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:50.212 13:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:50.212 00:04:50.212 real 0m1.999s 00:04:50.212 user 0m1.019s 00:04:50.212 sys 0m0.178s 00:04:50.474 13:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.474 13:33:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.474 ************************************ 00:04:50.474 END TEST locking_overlapped_coremask_via_rpc 00:04:50.474 ************************************ 00:04:50.474 13:33:47 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:50.474 13:33:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 451693 ]] 00:04:50.474 13:33:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 451693 00:04:50.474 13:33:47 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 451693 ']' 00:04:50.474 13:33:47 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 451693 00:04:50.474 13:33:47 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:04:50.474 13:33:47 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:50.474 13:33:47 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 451693 00:04:50.474 13:33:47 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:50.474 13:33:47 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:50.474 13:33:47 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 451693' 00:04:50.474 killing process with pid 451693 00:04:50.474 13:33:47 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 451693 00:04:50.474 13:33:47 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 451693 00:04:50.753 13:33:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 451703 ]] 00:04:50.753 13:33:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 451703 00:04:50.753 13:33:47 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 451703 ']' 00:04:50.753 13:33:47 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 451703 00:04:50.753 13:33:47 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:04:50.753 13:33:47 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
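The check_remaining_locks step above is the whole locking model in miniature: one lock file per claimed core under /var/tmp, and the harness simply compares a glob against the expected brace expansion. Reproduced from the trace:

```bash
# Locks were re-enabled on the first target (cores 0-2) and the second target's
# attempt failed, so exactly those three files should exist.
locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files actually present
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # one file per claimed core
[[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "exactly cores 0-2 are locked"
```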
00:04:50.753 13:33:47 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 451703 00:04:51.016 13:33:47 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:51.016 13:33:47 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:51.016 13:33:47 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 451703' 00:04:51.016 killing process with pid 451703 00:04:51.016 13:33:47 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 451703 00:04:51.016 13:33:47 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 451703 00:04:51.273 13:33:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:51.273 13:33:48 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:51.273 13:33:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 451693 ]] 00:04:51.273 13:33:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 451693 00:04:51.273 13:33:48 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 451693 ']' 00:04:51.273 13:33:48 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 451693 00:04:51.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (451693) - No such process 00:04:51.273 13:33:48 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 451693 is not found' 00:04:51.273 Process with pid 451693 is not found 00:04:51.273 13:33:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 451703 ]] 00:04:51.273 13:33:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 451703 00:04:51.273 13:33:48 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 451703 ']' 00:04:51.273 13:33:48 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 451703 00:04:51.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (451703) - No such process 00:04:51.273 13:33:48 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 451703 is not found' 00:04:51.273 Process with pid 451703 is not found 00:04:51.273 13:33:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:51.273 00:04:51.273 real 0m16.223s 00:04:51.273 user 0m28.102s 00:04:51.273 sys 0m5.214s 00:04:51.273 13:33:48 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.273 13:33:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.273 ************************************ 00:04:51.273 END TEST cpu_locks 00:04:51.273 ************************************ 00:04:51.273 00:04:51.273 real 0m40.161s 00:04:51.273 user 1m16.208s 00:04:51.273 sys 0m9.153s 00:04:51.273 13:33:48 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.273 13:33:48 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.273 ************************************ 00:04:51.273 END TEST event 00:04:51.273 ************************************ 00:04:51.273 13:33:48 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:51.273 13:33:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.273 13:33:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.273 13:33:48 -- common/autotest_common.sh@10 -- # set +x 00:04:51.273 ************************************ 00:04:51.273 START TEST thread 00:04:51.273 ************************************ 00:04:51.273 13:33:48 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:04:51.530 * Looking for test storage...
00:04:51.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread
00:04:51.530 13:33:48 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:04:51.530 13:33:48 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:04:51.530 13:33:48 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:51.530 13:33:48 thread -- common/autotest_common.sh@10 -- # set +x
00:04:51.530 ************************************
00:04:51.530 START TEST thread_poller_perf
00:04:51.530 ************************************
00:04:51.530 13:33:48 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:04:51.530 [2024-07-25 13:33:48.388622] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:04:51.530 [2024-07-25 13:33:48.388675] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid452190 ]
00:04:51.530 EAL: No free 2048 kB hugepages reported on node 1
00:04:51.530 [2024-07-25 13:33:48.445903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:51.530 [2024-07-25 13:33:48.550151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:51.530 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:04:52.899 ======================================
00:04:52.899 busy:2709961296 (cyc)
00:04:52.899 total_run_count: 368000
00:04:52.899 tsc_hz: 2700000000 (cyc)
00:04:52.899 ======================================
00:04:52.899 poller_cost: 7364 (cyc), 2727 (nsec)
00:04:52.899
00:04:52.899 real 0m1.288s
00:04:52.899 user 0m1.204s
00:04:52.899 sys 0m0.079s
00:04:52.899 13:33:49 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:52.899 13:33:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:04:52.899 ************************************
00:04:52.899 END TEST thread_poller_perf
00:04:52.899 ************************************
00:04:52.899 13:33:49 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:04:52.899 13:33:49 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:04:52.899 13:33:49 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:52.899 13:33:49 thread -- common/autotest_common.sh@10 -- # set +x
00:04:52.899 ************************************
00:04:52.899 START TEST thread_poller_perf
00:04:52.899 ************************************
00:04:52.899 13:33:49 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:04:52.899 [2024-07-25 13:33:49.728232] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
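The banner arithmetic is worth spelling out: busy is the TSC cycle count spent in the measurement window, and poller_cost is simply busy divided by total_run_count, with tsc_hz used to convert to nanoseconds. The first run's figures reproduce exactly:

```bash
# Values copied from the first poller_perf banner above (1 us timed pollers).
busy=2709961296 runs=368000 tsc_hz=2700000000
echo "poller_cost: $(( busy / runs )) cyc"                         # -> 7364
echo "poller_cost: $(( busy * 1000000000 / tsc_hz / runs )) nsec"  # -> 2727
```

The second run, with a 0 microsecond period, completes 4827000 iterations and drops the per-poll cost to 559 cycles, which is the point of the comparison: the timer bookkeeping dominates the timed-poller path.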
00:04:52.899 [2024-07-25 13:33:49.728294] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid452350 ]
00:04:52.899 EAL: No free 2048 kB hugepages reported on node 1
00:04:52.899 [2024-07-25 13:33:49.787442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:52.899 [2024-07-25 13:33:49.888948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:52.899 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:04:54.269 ======================================
00:04:54.269 busy:2702074395 (cyc)
00:04:54.269 total_run_count: 4827000
00:04:54.269 tsc_hz: 2700000000 (cyc)
00:04:54.269 ======================================
00:04:54.269 poller_cost: 559 (cyc), 207 (nsec)
00:04:54.269
00:04:54.269 real 0m1.285s
00:04:54.269 user 0m1.203s
00:04:54.269 sys 0m0.077s
00:04:54.269 13:33:50 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:54.269 13:33:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:04:54.269 ************************************
00:04:54.269 END TEST thread_poller_perf
00:04:54.269 ************************************
00:04:54.269 13:33:51 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:04:54.269
00:04:54.269 real 0m2.715s
00:04:54.269 user 0m2.461s
00:04:54.269 sys 0m0.253s
00:04:54.269 13:33:51 thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:54.269 13:33:51 thread -- common/autotest_common.sh@10 -- # set +x
00:04:54.269 ************************************
00:04:54.269 END TEST thread
00:04:54.269 ************************************
00:04:54.269 13:33:51 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]]
00:04:54.269 13:33:51 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:04:54.269 13:33:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:54.269 13:33:51 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:54.269 13:33:51 -- common/autotest_common.sh@10 -- # set +x
00:04:54.269 ************************************
00:04:54.269 START TEST app_cmdline
00:04:54.269 ************************************
00:04:54.269 13:33:51 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:04:54.270 * Looking for test storage...
00:04:54.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:04:54.270 13:33:51 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:04:54.270 13:33:51 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=452546
00:04:54.270 13:33:51 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:04:54.270 13:33:51 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 452546
00:04:54.270 13:33:51 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 452546 ']'
00:04:54.270 13:33:51 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:54.270 13:33:51 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:54.270 13:33:51 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:54.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:54.270 13:33:51 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:54.270 13:33:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:04:54.270 [2024-07-25 13:33:51.172760] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:04:54.270 [2024-07-25 13:33:51.172863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid452546 ]
00:04:54.270 EAL: No free 2048 kB hugepages reported on node 1
00:04:54.270 [2024-07-25 13:33:51.230487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:54.527 [2024-07-25 13:33:51.336508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:54.785 13:33:51 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:54.785 13:33:51 app_cmdline -- common/autotest_common.sh@864 -- # return 0
00:04:54.785 13:33:51 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:04:54.785 {
00:04:54.785 "version": "SPDK v24.09-pre git sha1 704257090",
00:04:54.785 "fields": {
00:04:54.785 "major": 24,
00:04:54.785 "minor": 9,
00:04:54.785 "patch": 0,
00:04:54.785 "suffix": "-pre",
00:04:54.785 "commit": "704257090"
00:04:54.785 }
00:04:54.785 }
00:04:55.043 13:33:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:04:55.043 13:33:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:04:55.043 13:33:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:04:55.043 13:33:51 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:04:55.043 13:33:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:04:55.043 13:33:51 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:55.043 13:33:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:04:55.043 13:33:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:04:55.043 13:33:51 app_cmdline -- app/cmdline.sh@26 -- # sort
00:04:55.043 13:33:51 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:55.043 13:33:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:04:55.043 13:33:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:04:55.043 13:33:51 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:04:55.043 13:33:51 app_cmdline -- common/autotest_common.sh@650 -- # local es=0
00:04:55.043 13:33:51 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:04:55.043 13:33:51 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:04:55.043 13:33:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:55.043 13:33:51 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:04:55.043 13:33:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
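Everything app_cmdline checks hinges on the --rpcs-allowed allowlist the target was started with: rpc_get_methods must report exactly the two permitted methods, and any other call (the env_dpdk_get_mem_stats probe that follows) has to be rejected. Condensed, with binary and script paths as they appear in the log:

```bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
$SPDK/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort  # -> rpc_get_methods, spdk_get_version
$SPDK/scripts/rpc.py spdk_get_version                      # allowed, returns the JSON above
$SPDK/scripts/rpc.py env_dpdk_get_mem_stats                # rejected: -32601 "Method not found"
```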
00:04:55.043 13:33:51 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:04:55.043 13:33:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:55.043 13:33:51 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:04:55.043 13:33:51 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:04:55.043 13:33:51 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:04:55.300 request:
00:04:55.300 {
00:04:55.300 "method": "env_dpdk_get_mem_stats",
00:04:55.300 "req_id": 1
00:04:55.300 }
00:04:55.300 Got JSON-RPC error response
00:04:55.300 response:
00:04:55.300 {
00:04:55.300 "code": -32601,
00:04:55.300 "message": "Method not found"
00:04:55.300 }
00:04:55.300 13:33:52 app_cmdline -- common/autotest_common.sh@653 -- # es=1
00:04:55.300 13:33:52 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:04:55.300 13:33:52 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:04:55.300 13:33:52 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:04:55.300 13:33:52 app_cmdline -- app/cmdline.sh@1 -- # killprocess 452546
00:04:55.300 13:33:52 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 452546 ']'
00:04:55.300 13:33:52 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 452546
00:04:55.300 13:33:52 app_cmdline -- common/autotest_common.sh@955 -- # uname
00:04:55.300 13:33:52 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:55.300 13:33:52 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 452546
00:04:55.301 13:33:52 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:55.301 13:33:52 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:55.301 13:33:52 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 452546'
00:04:55.301 killing process with pid 452546
00:04:55.301 13:33:52 app_cmdline -- common/autotest_common.sh@969 -- # kill 452546
00:04:55.301 13:33:52 app_cmdline -- common/autotest_common.sh@974 -- # wait 452546
00:04:55.619
00:04:55.619 real 0m1.512s
00:04:55.619 user 0m1.834s
00:04:55.619 sys 0m0.439s
00:04:55.619 13:33:52 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:55.619 13:33:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:04:55.619 ************************************
00:04:55.619 END TEST app_cmdline
00:04:55.619 ************************************
00:04:55.619 13:33:52 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:04:55.619 13:33:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:55.619 13:33:52 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:55.619 13:33:52 -- common/autotest_common.sh@10 -- # set +x
00:04:55.619 ************************************
00:04:55.619 START TEST version
00:04:55.619 ************************************
00:04:55.619 13:33:52 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:04:55.876 * Looking for test storage...
00:04:55.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:55.876 13:33:52 version -- app/version.sh@17 -- # get_header_version major 00:04:55.876 13:33:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:55.876 13:33:52 version -- app/version.sh@14 -- # cut -f2 00:04:55.876 13:33:52 version -- app/version.sh@14 -- # tr -d '"' 00:04:55.876 13:33:52 version -- app/version.sh@17 -- # major=24 00:04:55.876 13:33:52 version -- app/version.sh@18 -- # get_header_version minor 00:04:55.876 13:33:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:55.876 13:33:52 version -- app/version.sh@14 -- # cut -f2 00:04:55.876 13:33:52 version -- app/version.sh@14 -- # tr -d '"' 00:04:55.876 13:33:52 version -- app/version.sh@18 -- # minor=9 00:04:55.876 13:33:52 version -- app/version.sh@19 -- # get_header_version patch 00:04:55.876 13:33:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:55.876 13:33:52 version -- app/version.sh@14 -- # cut -f2 00:04:55.876 13:33:52 version -- app/version.sh@14 -- # tr -d '"' 00:04:55.876 13:33:52 version -- app/version.sh@19 -- # patch=0 00:04:55.876 13:33:52 version -- app/version.sh@20 -- # get_header_version suffix 00:04:55.876 13:33:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:55.876 13:33:52 version -- app/version.sh@14 -- # cut -f2 00:04:55.876 13:33:52 version -- app/version.sh@14 -- # tr -d '"' 00:04:55.876 13:33:52 version -- app/version.sh@20 -- # suffix=-pre 00:04:55.876 13:33:52 version -- app/version.sh@22 -- # version=24.9 00:04:55.876 13:33:52 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:55.876 13:33:52 version -- app/version.sh@28 -- # version=24.9rc0 00:04:55.877 13:33:52 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:55.877 13:33:52 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:55.877 13:33:52 version -- app/version.sh@30 -- # py_version=24.9rc0 00:04:55.877 13:33:52 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:04:55.877 00:04:55.877 real 0m0.115s 00:04:55.877 user 0m0.062s 00:04:55.877 sys 0m0.075s 00:04:55.877 13:33:52 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.877 13:33:52 version -- common/autotest_common.sh@10 -- # set +x 00:04:55.877 ************************************ 00:04:55.877 END TEST version 00:04:55.877 ************************************ 00:04:55.877 13:33:52 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:04:55.877 13:33:52 -- spdk/autotest.sh@202 -- # uname -s 00:04:55.877 13:33:52 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:04:55.877 13:33:52 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:04:55.877 13:33:52 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:04:55.877 13:33:52 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 
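get_header_version, which drives the version test above, is a grep/cut/tr pipeline over include/spdk/version.h; the helper body below is reassembled from the traced commands (the function definition itself is not shown in the log, so treat it as a sketch):

```bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
get_header_version() {   # e.g. get_header_version major -> 24
    grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" "$SPDK/include/spdk/version.h" \
        | cut -f2 | tr -d '"'
}
version="$(get_header_version major).$(get_header_version minor)"
# -> 24.9; the -pre suffix then maps to the 24.9rc0 reported above
```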
00:04:55.877 13:33:52 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:04:55.877 13:33:52 -- spdk/autotest.sh@264 -- # timing_exit lib 00:04:55.877 13:33:52 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:55.877 13:33:52 -- common/autotest_common.sh@10 -- # set +x 00:04:55.877 13:33:52 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:04:55.877 13:33:52 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:04:55.877 13:33:52 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:04:55.877 13:33:52 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:04:55.877 13:33:52 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:04:55.877 13:33:52 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:04:55.877 13:33:52 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:55.877 13:33:52 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:04:55.877 13:33:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.877 13:33:52 -- common/autotest_common.sh@10 -- # set +x 00:04:55.877 ************************************ 00:04:55.877 START TEST nvmf_tcp 00:04:55.877 ************************************ 00:04:55.877 13:33:52 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:55.877 * Looking for test storage... 00:04:55.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:55.877 13:33:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:55.877 13:33:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:55.877 13:33:52 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:55.877 13:33:52 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:04:55.877 13:33:52 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.877 13:33:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.877 ************************************ 00:04:55.877 START TEST nvmf_target_core 00:04:55.877 ************************************ 00:04:55.877 13:33:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:56.135 * Looking for test storage... 00:04:56.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.135 13:33:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.136 13:33:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.136 13:33:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.136 13:33:52 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.136 13:33:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:56.136 13:33:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.136 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:04:56.136 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:56.136 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:56.136 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:56.136 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.136 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.136 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:56.136 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:56.136 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:56.136 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:56.136 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:56.136 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:56.136 13:33:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:56.136 13:33:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:04:56.136 13:33:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.136 13:33:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:56.136 ************************************ 00:04:56.136 START TEST nvmf_abort 00:04:56.136 ************************************ 00:04:56.136 13:33:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:56.136 * Looking for test storage... 
00:04:56.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
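nvmftestinit then has to find physical ports to run on: gather_supported_nvmf_pci_devs, which produces the long run of array appends below, buckets PCI vendor:device IDs into e810/x722/mlx lists and, for TCP, prefers the E810 ports. The gist, with IDs copied from the trace (pci_bus_cache is an associative array populated earlier in common.sh, assumed here):

```bash
intel=0x8086 mellanox=0x15b3
e810=() x722=() mlx=()
e810+=(${pci_bus_cache["$intel:0x1592"]})    # E810 family
e810+=(${pci_bus_cache["$intel:0x159b"]})    # matches the two ports found below
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # ConnectX series; several more IDs in the trace
pci_devs=("${e810[@]}")                      # on this rig the E810 list wins
# Each device's interface name is then read from /sys/bus/pci/devices/$pci/net/,
# yielding cvl_0_0 and cvl_0_1.
```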
00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:56.136 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:56.137 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:04:56.137 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:04:56.137 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:04:56.137 13:33:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:04:58.665 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:04:58.665 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:04:58.665 13:33:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:04:58.665 Found net devices under 0000:0a:00.0: cvl_0_0 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:04:58.665 Found net devices under 0000:0a:00.1: cvl_0_1 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:04:58.665 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:04:58.666 
13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:04:58.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:04:58.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms
00:04:58.666
00:04:58.666 --- 10.0.0.2 ping statistics ---
00:04:58.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:04:58.666 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:04:58.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:04:58.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms
00:04:58.666
00:04:58.666 --- 10.0.0.1 ping statistics ---
00:04:58.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:04:58.666 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- #
nvmfpid=454593 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 454593 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 454593 ']' 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.666 [2024-07-25 13:33:55.325615] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:04:58.666 [2024-07-25 13:33:55.325689] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:58.666 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.666 [2024-07-25 13:33:55.389446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:58.666 [2024-07-25 13:33:55.492437] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:58.666 [2024-07-25 13:33:55.492494] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:58.666 [2024-07-25 13:33:55.492522] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:58.666 [2024-07-25 13:33:55.492533] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:58.666 [2024-07-25 13:33:55.492542] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
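The nvmfappstart/waitforlisten step above reduces to launching nvmf_tgt inside the target namespace and polling its RPC socket until it answers. A minimal sketch of that pattern, assuming an SPDK checkout at $SPDK_DIR; the retry count and sleep interval here are illustrative rather than the exact values used by autotest_common.sh:

    # Start the target on cores 1-3 (-m 0xE) inside the namespace; -e 0xFFFF
    # enables every tracepoint group, matching the Tracepoint Group Mask notices in this log.
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # waitforlisten pattern: poke the UNIX-domain RPC socket until it responds.
    for _ in $(seq 1 100); do
        "$SPDK_DIR/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done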
00:04:58.666 [2024-07-25 13:33:55.492625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.666 [2024-07-25 13:33:55.492688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:58.666 [2024-07-25 13:33:55.492691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.666 [2024-07-25 13:33:55.634334] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.666 Malloc0 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.666 Delay0 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.666 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:04:58.667 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:04:58.667 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.667 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.667 [2024-07-25 13:33:55.697326] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:58.924 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.924 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:58.924 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.924 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.924 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.924 13:33:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:04:58.924 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.924 [2024-07-25 13:33:55.835147] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:01.446 Initializing NVMe Controllers 00:05:01.446 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:01.446 controller IO queue size 128 less than required 00:05:01.446 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:01.446 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:01.446 Initialization complete. Launching workers. 
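The abort example launched at target/abort.sh@30 connects over the transport ID given with -r, queues reads at depth 128 against NSID 1, and fires abort commands at them; the "IO queue size 128 less than required" notice means requests beyond the controller's real queue size sit queued in the host driver, which is precisely the window the abort path exercises. The same invocation with the flags spelled out, assuming the target started above is still listening:

    # -r: transport ID string, shared across SPDK example tools
    # -c 0x1: run on core 0 only; -t 1: one-second run; -q 128: per-queue depth
    # -l warning: only warning-and-above log output
    "$SPDK_DIR/build/examples/abort" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

The NS/CTRLR lines that follow are the example's summary: reads completed and failed per namespace, and how many aborts were submitted and succeeded.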
00:05:01.446 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 32887 00:05:01.446 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32952, failed to submit 62 00:05:01.446 success 32891, unsuccess 61, failed 0 00:05:01.446 13:33:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:01.446 13:33:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.446 13:33:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.446 13:33:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.446 13:33:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:01.446 13:33:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:01.446 13:33:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:01.446 13:33:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:05:01.446 13:33:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:01.446 13:33:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:05:01.446 13:33:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:01.447 13:33:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:01.447 rmmod nvme_tcp 00:05:01.447 rmmod nvme_fabrics 00:05:01.447 rmmod nvme_keyring 00:05:01.447 13:33:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:01.447 13:33:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:05:01.447 13:33:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:05:01.447 13:33:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 454593 ']' 00:05:01.447 13:33:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 454593 00:05:01.447 13:33:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 454593 ']' 00:05:01.447 13:33:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 454593 00:05:01.447 13:33:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:05:01.447 13:33:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:01.447 13:33:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 454593 00:05:01.447 13:33:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:05:01.447 13:33:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:05:01.447 13:33:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 454593' 00:05:01.447 killing process with pid 454593 00:05:01.447 13:33:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 454593 00:05:01.447 13:33:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 454593 00:05:01.447 13:33:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:01.447 13:33:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:05:01.447 13:33:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:01.447 13:33:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:01.447 13:33:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:01.447 13:33:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:01.447 13:33:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:01.447 13:33:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:03.354 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:03.354 00:05:03.354 real 0m7.400s 00:05:03.354 user 0m10.710s 00:05:03.354 sys 0m2.603s 00:05:03.354 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.354 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:03.354 ************************************ 00:05:03.354 END TEST nvmf_abort 00:05:03.354 ************************************ 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:03.613 ************************************ 00:05:03.613 START TEST nvmf_ns_hotplug_stress 00:05:03.613 ************************************ 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:03.613 * Looking for test storage... 
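run_test, used at nvmf_target_core.sh@22 above, is the wrapper that prints the START TEST/END TEST banners seen throughout this log and propagates the wrapped script's exit status. Roughly, leaving out the timing bookkeeping the real helper in autotest_common.sh also performs:

    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        "$@"           # the test script itself; a nonzero status fails the suite
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }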
00:05:03.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:03.613 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:03.614 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:03.614 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:03.614 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:03.614 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:03.614 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:03.614 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:03.614 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:03.614 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:05:03.614 13:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:06.143 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:06.144 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:06.144 13:34:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:06.144 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:06.144 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:06.144 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:06.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:06.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:05:06.144 00:05:06.144 --- 10.0.0.2 ping statistics --- 00:05:06.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:06.144 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:06.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:06.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:05:06.144 00:05:06.144 --- 10.0.0.1 ping statistics --- 00:05:06.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:06.144 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=456827 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 456827 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 456827 ']' 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
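nvmf_tcp_init, replayed here for ns_hotplug_stress exactly as it was for nvmf_abort, splits one dual-port NIC into the two ends of the connection: port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while its sibling cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1); on this rig the two ports are presumably cabled back-to-back, so the pings above cross a real link. Condensed, with the interface names from this log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420 on the initiator side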
00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:06.144 13:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:06.144 [2024-07-25 13:34:02.823817] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:06.145 [2024-07-25 13:34:02.823896] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:06.145 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.145 [2024-07-25 13:34:02.884420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:06.145 [2024-07-25 13:34:02.983838] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:06.145 [2024-07-25 13:34:02.983893] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:06.145 [2024-07-25 13:34:02.983922] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:06.145 [2024-07-25 13:34:02.983933] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:06.145 [2024-07-25 13:34:02.983942] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:06.145 [2024-07-25 13:34:02.984025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.145 [2024-07-25 13:34:02.984092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.145 [2024-07-25 13:34:02.984096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.145 13:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.145 13:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:05:06.145 13:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:06.145 13:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:06.145 13:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:06.145 13:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:06.145 13:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:06.145 13:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:06.402 [2024-07-25 13:34:03.345741] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:06.402 13:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:06.659 13:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:06.917 
[2024-07-25 13:34:03.845613] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:06.917 13:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:07.174 13:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:07.432 Malloc0 00:05:07.432 13:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:07.690 Delay0 00:05:07.690 13:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:07.947 13:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:08.204 NULL1 00:05:08.204 13:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:08.461 13:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=457236 00:05:08.461 13:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:08.461 13:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:08.461 13:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:08.461 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.832 Read completed with error (sct=0, sc=11) 00:05:09.832 13:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:09.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.832 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:10.089 13:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:10.089 13:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:10.089 true 00:05:10.346 13:34:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:10.346 13:34:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:10.910 13:34:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:11.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:11.168 13:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:11.168 13:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:11.425 true 00:05:11.425 13:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:11.425 13:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:11.682 13:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:11.938 13:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:11.938 13:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:12.195 true 00:05:12.195 13:34:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:12.195 13:34:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:13.125 13:34:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:13.125 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:13.125 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:13.125 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:13.381 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:13.381 13:34:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:13.381 13:34:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:13.639 true 00:05:13.639 13:34:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:13.639 13:34:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
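From this point to the end of the run the log is a single pattern repeated: while spdk_nvme_perf (PID 457236, launched at ns_hotplug_stress.sh@40-@42 above) keeps issuing reads, the script verifies perf is alive with kill -0, hot-removes namespace 1, re-attaches Delay0, and grows NULL1 by one unit. Roughly, with the rpc.py path abbreviated and loop-termination details simplified relative to the real script:

    null_size=1000
    while kill -0 "$PERF_PID" 2> /dev/null; do                          # perf must survive each cycle
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove under I/O load
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # re-attach the namespace
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 "$null_size"                      # resize NULL1; prints "true"
    done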
00:05:13.896 13:34:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.154 13:34:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:14.154 13:34:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:14.411 true 00:05:14.411 13:34:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:14.411 13:34:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.351 13:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:15.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.610 13:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:15.610 13:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:15.867 true 00:05:15.867 13:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:15.867 13:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:16.125 13:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.382 13:34:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:16.382 13:34:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:16.382 true 00:05:16.382 13:34:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:16.382 13:34:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.782 13:34:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.782 13:34:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:17.782 13:34:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:18.039 true 00:05:18.039 13:34:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:18.039 13:34:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.296 13:34:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.553 13:34:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:18.553 13:34:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:18.810 true 00:05:18.810 13:34:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:18.810 13:34:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.743 13:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.743 13:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:19.743 13:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:20.000 true 00:05:20.000 13:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:20.000 13:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.257 13:34:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.514 13:34:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:20.514 13:34:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:20.772 true 00:05:20.772 13:34:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:20.772 13:34:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.704 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.962 13:34:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:21.962 13:34:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:22.219 true 00:05:22.219 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:22.219 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.476 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.734 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:22.734 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:22.992 true 00:05:22.992 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:22.992 13:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.291 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.292 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:23.292 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:23.549 true 00:05:23.549 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:23.549 13:34:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.920 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.177 13:34:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:25.178 13:34:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:25.435 true 00:05:25.435 13:34:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:25.435 13:34:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.692 13:34:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.949 13:34:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:25.949 13:34:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:26.206 true 00:05:26.206 13:34:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:26.206 13:34:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.137 13:34:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.137 13:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:27.137 13:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:27.394 true 00:05:27.394 13:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:27.394 13:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.650 13:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.907 13:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:27.907 13:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:28.164 true 00:05:28.164 13:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:28.164 13:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.095 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.351 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:29.351 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:29.607 true 00:05:29.607 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:29.607 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.864 13:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.121 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:30.121 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:30.379 true 00:05:30.379 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:30.379 13:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.310 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.310 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.310 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.310 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:31.565 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:31.565 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:31.822 true 00:05:31.822 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:31.822 13:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.078 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.334 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:32.334 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:32.591 true 00:05:32.591 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:32.591 13:34:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.522 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.779 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:33.779 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:34.036 true 00:05:34.036 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:34.036 13:34:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.293 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.550 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:34.550 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:34.807 true 00:05:34.807 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:34.807 13:34:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.739 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.996 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:35.996 13:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:36.254 true 00:05:36.254 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:36.254 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.511 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.767 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:36.767 13:34:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:36.767 true 00:05:36.767 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:36.767 13:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.697 13:34:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.697 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.955 13:34:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:37.955 13:34:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:38.213 true 00:05:38.213 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:38.213 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.470 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.727 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:38.727 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:38.727 Initializing NVMe Controllers 00:05:38.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:38.727 Controller IO queue size 128, less than required. 00:05:38.727 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:38.727 Controller IO queue size 128, less than required. 00:05:38.727 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:38.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:38.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:38.727 Initialization complete. Launching workers. 
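The repeating @44-@50 trace above is the first hotplug-stress phase: for as long as the background I/O workload (PID 457236) stays alive, the script hot-removes namespace 1 from nqn.2016-06.io.spdk:cnode1, re-attaches the Delay0 bdev, and resizes the NULL1 bdev one step larger per pass (null_size 1009 through 1028 in this window). The bare "true" lines are the JSON-RPC replies printed by bdev_null_resize, and the rate-limited "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are the workload's reads landing on the just-removed namespace; sct=0/sc=11 corresponds to the generic NVMe status "Invalid Namespace or Format". A minimal sketch of the loop, reconstructed from the xtrace only (perf_pid is a stand-in name, not necessarily what ns_hotplug_stress.sh itself uses):

    # Sketch reconstructed from the @44-@50 trace lines; not verbatim script source.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$perf_pid"; do                                          # @44: loop while the workload lives
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-remove NSID 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: re-attach the delay bdev
        null_size=$((null_size + 1))                                       # @49: next target size
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                      # @50: resize under I/O; prints "true"
    done
    wait "$perf_pid"                                                       # @53: reap the finished workload

The "Initializing NVMe Controllers ... Launching workers" banner just above is the workload's buffered start-up output being flushed as it exits; its latency summary follows.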
00:05:38.727 ========================================================
00:05:38.727                                                                              Latency(us)
00:05:38.727 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:05:38.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     968.76       0.47   69891.73    2878.15 1013896.49
00:05:38.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   11155.23       5.45   11474.86    3914.86  368930.42
00:05:38.727 ========================================================
00:05:38.727 Total                                                                  :   12123.99       5.92   16142.61    2878.15 1013896.49
00:05:38.727
00:05:38.984 true 00:05:38.984 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 457236 00:05:38.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (457236) - No such process 00:05:38.984 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 457236 00:05:38.984 13:34:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.288 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:39.573 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:05:39.573 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:05:39.573 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:39.573 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:39.573 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:39.830 null0 00:05:39.830 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:39.830 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:39.830 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:40.087 null1 00:05:40.087 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:40.087 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:40.087 13:34:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:40.087 null2 00:05:40.345 13:34:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:40.345 13:34:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:40.345 13:34:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
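In the summary, NSID 1 is the hot-plugged Delay0 namespace and NSID 2 is the NULL1 namespace that was only resized, and the numbers line up with that: roughly 969 IOPS at a 69.9 ms average for the namespace being yanked (worst case just over one second) versus roughly 11155 IOPS at 11.5 ms for its stable sibling. The throughput column is consistent with 512-byte reads (968.76 * 512 B is about 0.47 MiB/s), and the Total row is the IOPS-weighted combination of the two rows: (968.76 * 69891.73 + 11155.23 * 11474.86) / 12123.99 is about 16142.6 us, matching the printed 16142.61. With PID 457236 gone, kill -0 fails at script line 44, the loop exits, both namespaces are dropped (@54-@55), and the test moves to its multi-worker phase: eight 100 MB null bdevs with 4096-byte blocks, null0 through null7, one per worker.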
00:05:40.345 null3 00:05:40.345 13:34:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:40.345 13:34:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:40.345 13:34:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:40.603 null4 00:05:40.603 13:34:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:40.603 13:34:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:40.603 13:34:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:40.861 null5 00:05:40.861 13:34:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:40.861 13:34:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:40.861 13:34:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:41.118 null6 00:05:41.118 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:41.118 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:41.118 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:41.376 null7 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
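All eight backing bdevs exist at this point and the first worker has just been launched: the @62-@64 lines spawn one backgrounded add_remove per bdev, saving the PIDs for the @66 wait that appears a little further down (461184 through 461197 in this run). Roughly, again inferred from the xtrace rather than quoted from the script:

    # Sketch of the @58-@66 setup, inferred from the trace ($rpc_py as above).
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do               # @59-@60: create null0..null7
        "$rpc_py" bdev_null_create "null$i" 100 4096   # 100 MB null bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do               # @62-@64: one worker per namespace
        add_remove $((i + 1)) "null$i" &               # worker for NSID i+1 uses bdev null<i>
        pids+=($!)
    done
    wait "${pids[@]}"                                  # @66: block until all eight finish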
00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.376 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
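Each worker runs the same small add_remove routine, visible in the @14-@18 trace lines: its positional arguments become a namespace ID and a bdev name (@14), then it does ten attach/detach rounds against cnode1 (@16-@18). It appears to be approximately:

    # Sketch of add_remove (@14-@18), inferred from the trace; not verbatim source.
    add_remove() {
        local nsid=$1 bdev=$2              # @14: e.g. nsid=1, bdev=null0
        for ((i = 0; i < 10; i++)); do     # @16: ten add/remove rounds
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
        done
    }

Because every worker owns a distinct NSID, the eight loops can hammer the same subsystem concurrently without removing each other's namespaces.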
00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 461184 461185 461186 461189 461191 461193 461195 461197 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.377 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:41.634 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:41.634 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:41.634 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:41.634 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.635 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:41.635 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:41.892 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:41.892 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:41.892 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.892 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.893 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:41.893 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.893 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.893 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:41.893 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.893 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.893 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:42.151 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.151 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.151 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:42.151 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.151 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.151 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:42.151 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.151 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.151 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:42.151 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:05:42.151 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.151 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:42.151 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.151 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.151 13:34:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:42.408 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:42.409 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:42.409 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:42.409 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:42.409 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.409 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:42.409 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:42.409 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:42.666 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.666 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.666 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:42.667 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.667 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.667 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:42.667 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.667 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.667 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:42.667 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.667 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.667 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:42.667 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.667 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.667 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:42.667 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.667 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.667 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:42.667 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.667 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.667 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:42.667 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.667 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.667 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:42.925 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:42.925 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:42.925 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:42.925 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:42.925 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.925 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:42.925 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:42.925 13:34:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.183 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:43.440 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:43.440 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:43.440 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:43.440 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.440 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:43.440 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:43.440 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:43.440 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.698 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:43.956 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:43.956 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:43.956 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:43.956 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:43.956 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:43.956 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:43.956 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:43.956 13:34:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
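The back-to-back (( ++i )) and (( i < 10 )) lines around this point are not a script bug: eight backgrounded workers share one xtrace stream, so their trace lines interleave and sometimes appear doubled or out of order. A tiny standalone repro of the effect, unrelated to the test itself:

    # Two backgrounded subshells tracing to the same stderr; their '+' lines
    # interleave nondeterministically, like the worker traces in this log.
    set -x
    ( for i in 1 2 3; do :; done ) &
    ( for i in 1 2 3; do :; done ) &
    wait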
00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.213 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:44.471 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:44.471 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:44.471 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:44.471 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:44.471 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.471 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:44.471 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:44.471 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.729 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:44.986 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:44.986 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:44.986 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.986 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:44.986 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:44.986 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:44.986 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:44.986 13:34:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.244 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:45.502 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:45.502 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:45.502 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.502 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:45.502 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:45.502 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:45.502 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:45.502 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:45.761 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.761 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.761 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:45.761 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.761 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.761 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:45.761 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.761 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.761 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:45.761 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.761 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.761 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:46.019 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.019 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.019 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:46.019 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.019 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.019 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:46.019 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.019 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.019 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:46.019 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.019 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.019 13:34:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:46.277 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:46.277 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:46.277 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.277 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:46.277 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:46.277 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:46.277 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:46.277 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:46.541 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
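With eight workers interleaving, it is hard to eyeball which namespaces are attached at any instant. A hypothetical spot-check between rounds (nvmf_get_subsystems is a standard SPDK RPC, but the JSON field names here are from memory rather than from this trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" nvmf_get_subsystems | python3 -c '
  import json, sys
  for sub in json.load(sys.stdin):
      if sub["nqn"] == "nqn.2016-06.io.spdk:cnode1":
          print(sorted(ns["nsid"] for ns in sub.get("namespaces", [])))
  '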
00:05:46.541 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.541 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:46.541 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.541 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.541 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:46.541 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.541 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.541 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:46.541 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.541 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.542 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:46.542 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.542 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.542 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:46.542 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.542 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.542 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:46.542 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.542 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.542 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:46.542 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.542 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.542 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:46.802 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:46.802 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:46.802 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.802 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:46.802 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:46.802 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:46.802 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:46.802 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:47.061 rmmod nvme_tcp 00:05:47.061 rmmod nvme_fabrics 00:05:47.061 rmmod nvme_keyring 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 456827 ']' 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 456827 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 456827 ']' 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 456827 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.061 13:34:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 456827 00:05:47.061 13:34:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:05:47.061 13:34:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:05:47.061 13:34:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 456827' 00:05:47.061 killing process with pid 456827 00:05:47.061 13:34:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 456827 00:05:47.061 13:34:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 456827 00:05:47.320 13:34:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:47.320 13:34:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 
-- # [[ tcp == \t\c\p ]] 00:05:47.320 13:34:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:47.320 13:34:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:47.320 13:34:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:47.320 13:34:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:47.320 13:34:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:47.320 13:34:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:49.858 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:49.858 00:05:49.858 real 0m45.907s 00:05:49.858 user 3m28.782s 00:05:49.858 sys 0m16.428s 00:05:49.858 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.858 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:49.858 ************************************ 00:05:49.858 END TEST nvmf_ns_hotplug_stress 00:05:49.859 ************************************ 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:49.859 ************************************ 00:05:49.859 START TEST nvmf_delete_subsystem 00:05:49.859 ************************************ 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:49.859 * Looking for test storage... 
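Before the delete_subsystem output continues, the nvmftestfini teardown that just ran above compresses to a few steps. A sketch following the harness's own markers (the retry shape around modprobe is an assumption, and _remove_spdk_ns is the harness's helper, not a standard command):

  trap - SIGINT SIGTERM EXIT            # @68: clear the error trap before tearing down
  sync
  set +e
  for i in {1..20}; do                  # module unload can race with connection teardown, so retry
      modprobe -v -r nvme-tcp && break  # the rmmod lines show nvme_fabrics and nvme_keyring going too
  done
  modprobe -v -r nvme-fabrics
  set -e
  kill "$nvmfpid" && wait "$nvmfpid"    # killprocess 456827: stop the nvmf_tgt reactors
  _remove_spdk_ns                       # drop the cvl_0_0_ns_spdk network namespace
  ip -4 addr flush cvl_0_1              # clear the initiator-side address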
00:05:49.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:05:49.859 13:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
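The e810/x722/mlx arrays declared above are buckets keyed by PCI vendor:device ID; the entries that follow populate them and then pick which ports the test will use. The harness reads a prebuilt pci_bus_cache, but a rough standalone equivalent using lspci (the cache format itself is not shown in this trace) looks like:

  intel=0x8086 mellanox=0x15b3
  declare -a e810=() x722=() mlx=()
  while read -r addr vendor device; do
      case "$vendor:$device" in
          "$intel:0x1592" | "$intel:0x159b") e810+=("$addr") ;;  # E810 family (this host: 0x159b)
          "$intel:0x37d2")                   x722+=("$addr") ;;  # X722
          "$mellanox:"*)                     mlx+=("$addr") ;;   # ConnectX/BlueField IDs
      esac
  done < <(lspci -Dnmm | awk '{ gsub(/"/, ""); print $1, "0x" $3, "0x" $4 }')

  pci_devs=("${e810[@]}")                    # a tcp run on e810 hardware keeps only these ports
  for pci in "${pci_devs[@]}"; do
      ls "/sys/bus/pci/devices/$pci/net/"    # e.g. cvl_0_0 under 0000:0a:00.0
  done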
00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:51.759 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:51.759 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:51.759 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:51.760 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:51.760 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:05:51.760 13:34:48 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:51.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:51.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:05:51.760 00:05:51.760 --- 10.0.0.2 ping statistics --- 00:05:51.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:51.760 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:51.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:51.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:05:51.760 00:05:51.760 --- 10.0.0.1 ping statistics --- 00:05:51.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:51.760 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=463949 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 463949 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 463949 ']' 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.760 13:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:51.760 [2024-07-25 13:34:48.730403] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
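The nvmf_tcp_init sequence traced above is easier to read as one block: the target port is moved into its own network namespace while the initiator port stays in the root namespace, giving a real (non-loopback) TCP path between 10.0.0.1 and 10.0.0.2. Collected from the @244-@268 entries:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP from the link
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

This is also why NVMF_APP gets the namespace command prepended: the nvmf_tgt launch below runs through ip netns exec cvl_0_0_ns_spdk.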
00:05:51.760 [2024-07-25 13:34:48.730500] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:51.760 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.018 [2024-07-25 13:34:48.794347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.018 [2024-07-25 13:34:48.905855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:52.018 [2024-07-25 13:34:48.905917] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:52.018 [2024-07-25 13:34:48.905931] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:52.018 [2024-07-25 13:34:48.905942] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:52.018 [2024-07-25 13:34:48.905967] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:52.018 [2024-07-25 13:34:48.906069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.018 [2024-07-25 13:34:48.906073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.018 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.018 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:05:52.018 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:52.018 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:52.018 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:52.018 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:52.018 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:52.018 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.018 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:52.018 [2024-07-25 13:34:49.040499] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:52.018 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.018 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:52.018 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.018 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:52.018 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.018 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:52.018 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.018 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:52.275 [2024-07-25 13:34:49.056755] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:52.275 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.275 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:05:52.275 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.275 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:52.275 NULL1 00:05:52.275 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.275 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:52.275 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.275 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:52.275 Delay0 00:05:52.275 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.275 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.275 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.275 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:52.275 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.275 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=464096 00:05:52.275 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:05:52.275 13:34:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:52.275 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.275 [2024-07-25 13:34:49.131348] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
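[annotation: taken together, the rpc_cmd calls above stand up the target that the perf run then drives; a sketch assuming rpc_cmd wraps SPDK's scripts/rpc.py against /var/tmp/spdk.sock. Note the 1,000,000 us latencies injected by Delay0: with roughly one second per I/O, the nvmf_delete_subsystem that follows fires while the queue is still full, which is what produces the aborted-completion storm below]
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                    # 1000 MB null bdev, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000         # 1 s avg/p99 read+write latency
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # drive I/O from the initiator side while the subsystem is deleted underneath it
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &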
00:05:54.171 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:05:54.171 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:54.171 13:34:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:05:54.429 [... repeated 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions and periodic 'starting I/O failed: -6' entries, emitted between 00:05:54.429 and 00:05:55.363 while the deleted subsystem aborts in-flight I/O, omitted; the unique qpair-state errors they bracket follow ...]
00:05:54.430 [2024-07-25 13:34:51.354637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efc5800d660 is same with the state(5) to be set
00:05:54.430 [2024-07-25 13:34:51.355454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efc58000c00 is same with the state(5) to be set
00:05:54.430 [2024-07-25 13:34:51.356006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19dd8f0 is same with the state(5) to be set
00:05:55.363 [2024-07-25 13:34:52.311597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19deac0 is same with the state(5) to be set
00:05:55.363 [2024-07-25 13:34:52.357745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ddc20 is same with the state(5) to be set
00:05:55.363 [2024-07-25 13:34:52.358653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19dd3e0 is same with the state(5) to be set
00:05:55.363 [2024-07-25 13:34:52.358884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19dd5c0 is same with the state(5) to be set
00:05:55.363 [2024-07-25 13:34:52.359025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efc5800d330 is same with the state(5) to be set
00:05:55.363 Initializing NVMe Controllers
00:05:55.363 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:55.363 Controller IO queue size 128, less than required.
00:05:55.363 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:55.363 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:05:55.363 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:05:55.363 Initialization complete. Launching workers.
00:05:55.363 ========================================================
00:05:55.363 Latency(us)
00:05:55.363 Device Information                                                       :    IOPS   MiB/s    Average        min        max
00:05:55.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  186.10    0.09  959805.88     706.41 1013337.36
00:05:55.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  153.35    0.07  882967.08     416.24 1011442.71
00:05:55.364 ========================================================
00:05:55.364 Total                                                                    :  339.45    0.17  925093.61     416.24 1013337.36
00:05:55.364
00:05:55.364 [2024-07-25 13:34:52.360206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19deac0 (9): Bad file descriptor
00:05:55.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:05:55.364 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:55.364 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:05:55.364 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 464096
00:05:55.364 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 464096
00:05:55.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (464096) - No such process
00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 464096
00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@652 -- # valid_exec_arg wait 464096 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 464096 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:55.929 [2024-07-25 13:34:52.882254] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=464506 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 464506 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:55.929 13:34:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:55.929 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.929 [2024-07-25 13:34:52.937468] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:05:56.493 13:34:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:56.493 13:34:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 464506 00:05:56.493 13:34:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:57.057 13:34:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:57.057 13:34:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 464506 00:05:57.057 13:34:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:57.621 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:57.621 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 464506 00:05:57.621 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:57.876 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:57.876 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 464506 00:05:57.876 13:34:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:58.438 13:34:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:58.438 13:34:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 464506 00:05:58.438 13:34:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:59.002 13:34:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:59.002 13:34:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 464506 00:05:59.002 13:34:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:59.259 Initializing NVMe Controllers 00:05:59.259 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:59.259 Controller IO queue size 128, less than required. 00:05:59.259 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:59.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:05:59.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:05:59.259 Initialization complete. Launching workers. 
00:05:59.259 ======================================================== 00:05:59.259 Latency(us) 00:05:59.259 Device Information : IOPS MiB/s Average min max 00:05:59.259 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003570.68 1000161.57 1042560.46 00:05:59.259 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005315.71 1000162.92 1042722.18 00:05:59.259 ======================================================== 00:05:59.259 Total : 256.00 0.12 1004443.19 1000161.57 1042722.18 00:05:59.259 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 464506 00:05:59.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (464506) - No such process 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 464506 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:59.517 rmmod nvme_tcp 00:05:59.517 rmmod nvme_fabrics 00:05:59.517 rmmod nvme_keyring 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 463949 ']' 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 463949 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 463949 ']' 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 463949 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 463949 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 463949' 00:05:59.517 killing process with pid 463949 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 463949 00:05:59.517 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 463949 00:05:59.777 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:59.777 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:05:59.777 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:59.777 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:59.777 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:59.777 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:59.777 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:59.777 13:34:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:02.311 00:06:02.311 real 0m12.396s 00:06:02.311 user 0m28.021s 00:06:02.311 sys 0m3.094s 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:02.311 ************************************ 00:06:02.311 END TEST nvmf_delete_subsystem 00:06:02.311 ************************************ 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:02.311 ************************************ 00:06:02.311 START TEST nvmf_host_management 00:06:02.311 ************************************ 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:02.311 * Looking for test storage... 
00:06:02.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain directories repeated six more times, then the standard system directories through /var/lib/snapd/snap/bin ...]
00:06:02.311 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same expansion with this directory promoted to the front; duplicate PATH omitted ...]
00:06:02.312 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same expansion with this directory promoted to the front; duplicate PATH omitted ...]
00:06:02.312 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo [... exported PATH, identical to the @4 expansion; duplicate omitted ...]
00:06:02.312 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0
13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args
13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:06:02.312 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:02.312 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:02.312 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:02.312 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:02.312 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:02.312 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:02.312 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:02.312 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:02.312 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:02.312 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:02.312 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:02.312 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:02.312 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:02.312 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:02.312 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:02.312 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:06:02.312 13:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:04.227 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:04.227 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:06:04.227 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:04.227 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:04.227 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:04.227 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:04.227 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:04.227 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:06:04.227 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:04.227 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:06:04.227 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:06:04.227 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:06:04.227 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:06:04.228 
13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:04.228 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:04.228 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:04.228 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:04.228 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:04.228 13:35:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:04.228 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:04.228 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:04.228 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:04.228 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:04.228 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:04.228 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:04.228 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:04.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:04.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:06:04.228 00:06:04.228 --- 10.0.0.2 ping statistics --- 00:06:04.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:04.228 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:06:04.228 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:04.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:04.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:06:04.228 00:06:04.228 --- 10.0.0.1 ping statistics --- 00:06:04.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:04.228 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:06:04.228 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:04.228 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:06:04.228 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:04.228 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:04.228 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:04.229 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:04.229 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:04.229 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:04.229 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:04.229 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:04.229 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:04.229 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:04.229 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:04.229 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:04.229 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:04.229 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=466846 00:06:04.229 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:04.229 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 466846 00:06:04.229 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 466846 ']' 00:06:04.229 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.229 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.229 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.229 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.229 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:04.229 [2024-07-25 13:35:01.182152] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:04.229 [2024-07-25 13:35:01.182230] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:04.229 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.229 [2024-07-25 13:35:01.246527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.488 [2024-07-25 13:35:01.359034] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:04.488 [2024-07-25 13:35:01.359095] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:04.488 [2024-07-25 13:35:01.359111] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:04.488 [2024-07-25 13:35:01.359123] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:04.488 [2024-07-25 13:35:01.359133] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:04.488 [2024-07-25 13:35:01.359194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.488 [2024-07-25 13:35:01.359348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.488 [2024-07-25 13:35:01.359398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:04.488 [2024-07-25 13:35:01.359401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.488 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.488 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:04.488 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:04.488 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:04.488 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:04.488 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:04.488 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:04.488 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.488 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:04.488 [2024-07-25 13:35:01.515698] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:04.746 Malloc0 00:06:04.746 [2024-07-25 13:35:01.576677] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=467016 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 467016 /var/tmp/bdevperf.sock 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 467016 ']' 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:04.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
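The nvmf_tcp_init trace above (nvmf/common.sh@229-268) builds the topology every later step depends on: the target port cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened for NVMe/TCP, and reachability is checked with one ping in each direction. As a minimal consolidated sketch, using the interface names and addresses from this run:

#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init sequence traced above. Requires root and two
# cabled ports; here the ice ports cvl_0_0/cvl_0_1 enumerated from 0000:0a:00.*.
set -e
target=cvl_0_0 initiator=cvl_0_1 ns=cvl_0_0_ns_spdk

ip -4 addr flush "$target"                # clear stale addresses first
ip -4 addr flush "$initiator"
ip netns add "$ns"                        # private namespace for the target
ip link set "$target" netns "$ns"         # the target port now lives inside it

ip addr add 10.0.0.1/24 dev "$initiator"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target"

ip link set "$initiator" up
ip netns exec "$ns" ip link set "$target" up
ip netns exec "$ns" ip link set lo up

iptables -I INPUT 1 -i "$initiator" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                        # root ns -> target ns
ip netns exec "$ns" ping -c 1 10.0.0.1    # target ns -> root ns

The target application is then started inside that namespace (nvmf/common.sh@480: ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), which is why its 10.0.0.2:4420 listener is only reachable across this link.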
00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:04.746 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:06:04.746 { 00:06:04.747 "params": { 00:06:04.747 "name": "Nvme$subsystem", 00:06:04.747 "trtype": "$TEST_TRANSPORT", 00:06:04.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:04.747 "adrfam": "ipv4", 00:06:04.747 "trsvcid": "$NVMF_PORT", 00:06:04.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:04.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:04.747 "hdgst": ${hdgst:-false}, 00:06:04.747 "ddgst": ${ddgst:-false} 00:06:04.747 }, 00:06:04.747 "method": "bdev_nvme_attach_controller" 00:06:04.747 } 00:06:04.747 EOF 00:06:04.747 )") 00:06:04.747 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:06:04.747 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:06:04.747 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:06:04.747 13:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:06:04.747 "params": { 00:06:04.747 "name": "Nvme0", 00:06:04.747 "trtype": "tcp", 00:06:04.747 "traddr": "10.0.0.2", 00:06:04.747 "adrfam": "ipv4", 00:06:04.747 "trsvcid": "4420", 00:06:04.747 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:04.747 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:04.747 "hdgst": false, 00:06:04.747 "ddgst": false 00:06:04.747 }, 00:06:04.747 "method": "bdev_nvme_attach_controller" 00:06:04.747 }' 00:06:04.747 [2024-07-25 13:35:01.651280] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:04.747 [2024-07-25 13:35:01.651382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467016 ] 00:06:04.747 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.747 [2024-07-25 13:35:01.712473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.005 [2024-07-25 13:35:01.822616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.005 Running I/O for 10 seconds... 
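The JSON bdevperf just consumed (printed in full by nvmf/common.sh@558 above) comes from gen_nvmf_target_json: one bdev_nvme_attach_controller stanza per requested subsystem is captured from a heredoc, the stanzas are joined with IFS=',', and jq validates and pretty-prints the assembled document, which bdevperf reads through the --json /dev/fd/63 process substitution. An approximate, self-contained sketch of the pattern, assuming the variable values from this run (the in-tree helper may also emit extra bdev option stanzas):

#!/usr/bin/env bash
# Approximation of gen_nvmf_target_json (nvmf/common.sh@532-558). The
# heredocs expand when cat runs, so the variables must already be set.
TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(
            cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP", "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false}, "ddgst": ${ddgst:-false} },
  "method": "bdev_nvme_attach_controller" }
EOF
        )")
    done
    local IFS=,
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}

# Consumed as at target/host_management.sh@72:
#   bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) ...
gen_nvmf_target_json 0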
00:06:05.265 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.265 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:05.265 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:05.265 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.265 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.265 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.265 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:05.265 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:05.265 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:05.265 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:05.265 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:05.265 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:05.265 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:05.265 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:05.265 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:05.265 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.265 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:05.265 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.265 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.265 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:05.265 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:05.265 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:05.526 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:05.526 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:05.526 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:05.526 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:05.526 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.526 13:35:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.526 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.526 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:06:05.526 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:06:05.526 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:05.526 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:05.526 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:05.526 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:05.527 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.527 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.527 [2024-07-25 13:35:02.407470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bba650 is same with the state(5) to be set 00:06:05.527 [2024-07-25 13:35:02.407560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bba650 is same with the state(5) to be set 00:06:05.527 [2024-07-25 13:35:02.407576] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bba650 is same with the state(5) to be set 00:06:05.527 [2024-07-25 13:35:02.407589] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bba650 is same with the state(5) to be set 00:06:05.527 [2024-07-25 13:35:02.407601] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bba650 is same with the state(5) to be set 00:06:05.527 [2024-07-25 13:35:02.407613] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bba650 is same with the state(5) to be set 00:06:05.527 [2024-07-25 13:35:02.407624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bba650 is same with the state(5) to be set 00:06:05.527 [2024-07-25 13:35:02.407636] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bba650 is same with the state(5) to be set 00:06:05.527 [2024-07-25 13:35:02.407648] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bba650 is same with the state(5) to be set 00:06:05.527 [2024-07-25 13:35:02.407659] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bba650 is same with the state(5) to be set 00:06:05.527 [2024-07-25 13:35:02.407670] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bba650 is same with the state(5) to be set 00:06:05.527 [2024-07-25 13:35:02.407682] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bba650 is same with the state(5) to be set 00:06:05.527 [2024-07-25 13:35:02.407693] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bba650 is same with the state(5) to be set 00:06:05.527 [2024-07-25 13:35:02.407705] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bba650 is same with the state(5) to be set 00:06:05.527 [2024-07-25 
13:35:02.407725] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bba650 is same with the state(5) to be set 00:06:05.527 [2024-07-25 13:35:02.407737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bba650 is same with the state(5) to be set 00:06:05.527 [2024-07-25 13:35:02.407749] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bba650 is same with the state(5) to be set 00:06:05.527 [2024-07-25 13:35:02.407761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bba650 is same with the state(5) to be set 00:06:05.527 [2024-07-25 13:35:02.407773] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bba650 is same with the state(5) to be set 00:06:05.527 [2024-07-25 13:35:02.407784] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bba650 is same with the state(5) to be set 00:06:05.527 [2024-07-25 13:35:02.407796] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bba650 is same with the state(5) to be set 00:06:05.527 [2024-07-25 13:35:02.407807] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bba650 is same with the state(5) to be set 00:06:05.527 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.527 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:05.527 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.527 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.527 [2024-07-25 13:35:02.415270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.527 [2024-07-25 13:35:02.415312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.527 [2024-07-25 13:35:02.415331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.527 [2024-07-25 13:35:02.415345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.527 [2024-07-25 13:35:02.415366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.527 [2024-07-25 13:35:02.415379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.527 [2024-07-25 13:35:02.415393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.527 [2024-07-25 13:35:02.415406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.527 [2024-07-25 13:35:02.415420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e3790 is same with the state(5) to be set 00:06:05.527 [2024-07-25 13:35:02.415753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:06:05.527 [2024-07-25 13:35:02.415776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.527 [2024-07-25 13:35:02.415801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.527 [2024-07-25 13:35:02.415817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.527 [2024-07-25 13:35:02.415833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.527 [2024-07-25 13:35:02.415861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.527 [2024-07-25 13:35:02.415883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.527 [2024-07-25 13:35:02.415897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.527 [2024-07-25 13:35:02.415912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.527 [2024-07-25 13:35:02.415925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.527 [2024-07-25 13:35:02.415940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.527 [2024-07-25 13:35:02.415953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.527 [2024-07-25 13:35:02.415968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.527 [2024-07-25 13:35:02.415981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.527 [2024-07-25 13:35:02.415996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.527 [2024-07-25 13:35:02.416009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.527 [2024-07-25 13:35:02.416024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.527 [2024-07-25 13:35:02.416052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.527 [2024-07-25 13:35:02.416080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.527 [2024-07-25 13:35:02.416095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.527 [2024-07-25 13:35:02.416111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:05.527 [2024-07-25 13:35:02.416126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.527 [2024-07-25 13:35:02.416141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.527 [2024-07-25 13:35:02.416156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.527 [2024-07-25 13:35:02.416173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.527 [2024-07-25 13:35:02.416187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.527 [2024-07-25 13:35:02.416202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.527 [2024-07-25 13:35:02.416216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.527 [2024-07-25 13:35:02.416231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.527 [2024-07-25 13:35:02.416244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.527 [2024-07-25 13:35:02.416259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.527 [2024-07-25 13:35:02.416277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.527 [2024-07-25 13:35:02.416292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.416306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.416321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.416334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.416349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.416379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.416396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.416416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.416431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 
[2024-07-25 13:35:02.416445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.416459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.416473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.416487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.416500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.416515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.416529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.416545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.416559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.416574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.416587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.416602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.416615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.416630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.416644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.416663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.416678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.416693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.416706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.416721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 
13:35:02.416734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.416749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.416762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.416777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.416790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.416805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.416818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.416833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.416846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.416862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.416875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.416891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.416904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.416920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.416934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.416950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.416964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.416979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.416994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.417009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 
13:35:02.417026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.417066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.417084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.417101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.417116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.417131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.417145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.417161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.417177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.417192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.417206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.417222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.417236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.417251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.417266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.417281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.417295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.417311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 13:35:02.417325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.417340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.528 [2024-07-25 
13:35:02.417374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.528 [2024-07-25 13:35:02.417390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.529 [2024-07-25 13:35:02.417404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.529 [2024-07-25 13:35:02.417419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.529 [2024-07-25 13:35:02.417432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.529 [2024-07-25 13:35:02.417450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.529 [2024-07-25 13:35:02.417464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.529 [2024-07-25 13:35:02.417480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.529 [2024-07-25 13:35:02.417498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.529 [2024-07-25 13:35:02.417513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.529 [2024-07-25 13:35:02.417526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.529 [2024-07-25 13:35:02.417542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.529 [2024-07-25 13:35:02.417563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.529 [2024-07-25 13:35:02.417578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.529 [2024-07-25 13:35:02.417591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.529 [2024-07-25 13:35:02.417606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.529 [2024-07-25 13:35:02.417619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.529 [2024-07-25 13:35:02.417634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.529 [2024-07-25 13:35:02.417648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.529 [2024-07-25 13:35:02.417663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.529 [2024-07-25 
13:35:02.417676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.529 [2024-07-25 13:35:02.417691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.529 [2024-07-25 13:35:02.417704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.529 [2024-07-25 13:35:02.417719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.529 [2024-07-25 13:35:02.417732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.529 [2024-07-25 13:35:02.417747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.529 [2024-07-25 13:35:02.417760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.529 [2024-07-25 13:35:02.417853] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bf45a0 was disconnected and freed. reset controller. 00:06:05.529 [2024-07-25 13:35:02.418974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:06:05.529 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.529 13:35:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:05.529 task offset: 81920 on job bdev=Nvme0n1 fails 00:06:05.529 00:06:05.529 Latency(us) 00:06:05.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:05.529 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:05.529 Job: Nvme0n1 ended in about 0.41 seconds with error 00:06:05.529 Verification LBA range: start 0x0 length 0x400 00:06:05.529 Nvme0n1 : 0.41 1574.27 98.39 157.43 0.00 35903.53 2936.98 33981.63 00:06:05.529 =================================================================================================================== 00:06:05.529 Total : 1574.27 98.39 157.43 0.00 35903.53 2936.98 33981.63 00:06:05.529 [2024-07-25 13:35:02.420854] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:05.529 [2024-07-25 13:35:02.420881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e3790 (9): Bad file descriptor 00:06:05.529 [2024-07-25 13:35:02.431532] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
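This first pass is the fault-injection half of the test: waitforio (host_management.sh@54-64 above) polls bdevperf's iostat until Nvme0n1 has completed at least 100 reads (67 on the first sample, 579 on the second), the host NQN is then removed from cnode0 while I/O is in flight, which is what produced the ABORTED - SQ DELETION completion dump and the failed-job latency report, and re-adding the host lets the controller reset complete. A minimal sketch of the polling helper, assuming SPDK's rpc.py is on PATH:

#!/usr/bin/env bash
# Sketch of the waitforio loop traced above: sample num_read_ops up to ten
# times, 0.25 s apart, and succeed once the count crosses the threshold.
waitforio() {
    local rpc_sock=$1 bdev=$2
    local i reads
    for ((i = 10; i > 0; i--)); do
        reads=$(rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && return 0
        sleep 0.25
    done
    return 1
}

waitforio /var/tmp/bdevperf.sock Nvme0n1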
00:06:06.474 13:35:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 467016 00:06:06.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (467016) - No such process 00:06:06.474 13:35:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:06.474 13:35:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:06.474 13:35:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:06.474 13:35:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:06.474 13:35:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:06:06.474 13:35:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:06:06.474 13:35:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:06:06.474 13:35:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:06:06.474 { 00:06:06.474 "params": { 00:06:06.474 "name": "Nvme$subsystem", 00:06:06.474 "trtype": "$TEST_TRANSPORT", 00:06:06.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:06.474 "adrfam": "ipv4", 00:06:06.474 "trsvcid": "$NVMF_PORT", 00:06:06.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:06.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:06.474 "hdgst": ${hdgst:-false}, 00:06:06.474 "ddgst": ${ddgst:-false} 00:06:06.474 }, 00:06:06.474 "method": "bdev_nvme_attach_controller" 00:06:06.474 } 00:06:06.474 EOF 00:06:06.474 )") 00:06:06.474 13:35:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:06:06.474 13:35:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:06:06.474 13:35:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:06:06.474 13:35:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:06:06.474 "params": { 00:06:06.474 "name": "Nvme0", 00:06:06.474 "trtype": "tcp", 00:06:06.474 "traddr": "10.0.0.2", 00:06:06.474 "adrfam": "ipv4", 00:06:06.474 "trsvcid": "4420", 00:06:06.474 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:06.474 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:06.474 "hdgst": false, 00:06:06.474 "ddgst": false 00:06:06.474 }, 00:06:06.474 "method": "bdev_nvme_attach_controller" 00:06:06.474 }' 00:06:06.474 [2024-07-25 13:35:03.468272] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:06.474 [2024-07-25 13:35:03.468343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467172 ] 00:06:06.474 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.737 [2024-07-25 13:35:03.528634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.737 [2024-07-25 13:35:03.638515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.996 Running I/O for 1 seconds... 00:06:08.377 00:06:08.377 Latency(us) 00:06:08.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:08.377 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:08.377 Verification LBA range: start 0x0 length 0x400 00:06:08.377 Nvme0n1 : 1.03 1740.06 108.75 0.00 0.00 36090.59 5145.79 37865.24 00:06:08.377 =================================================================================================================== 00:06:08.377 Total : 1740.06 108.75 0.00 0.00 36090.59 5145.79 37865.24 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:08.377 rmmod nvme_tcp 00:06:08.377 rmmod nvme_fabrics 00:06:08.377 rmmod nvme_keyring 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 466846 ']' 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 466846 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 466846 ']' 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 466846 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@955 -- # uname 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 466846 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 466846' 00:06:08.377 killing process with pid 466846 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 466846 00:06:08.377 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 466846 00:06:08.636 [2024-07-25 13:35:05.601970] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:08.636 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:08.636 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:08.636 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:08.636 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:08.636 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:08.636 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:08.636 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:08.636 13:35:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:11.172 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:11.172 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:11.172 00:06:11.172 real 0m8.845s 00:06:11.172 user 0m20.064s 00:06:11.172 sys 0m2.646s 00:06:11.172 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.172 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.172 ************************************ 00:06:11.172 END TEST nvmf_host_management 00:06:11.172 ************************************ 00:06:11.172 13:35:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:11.172 13:35:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:11.172 13:35:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.172 13:35:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:11.172 ************************************ 00:06:11.172 START TEST nvmf_lvol 00:06:11.172 ************************************ 00:06:11.172 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:11.172 * Looking for test storage... 00:06:11.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
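In outline, the nvmftestinit sequence traced below hands one port of the two-port NIC to a dedicated network namespace so that a single host can act as both NVMe/TCP target and initiator over real hardware. A minimal sketch of that plumbing, using the cvl_0_0/cvl_0_1 device names and addresses from this run (full paths shortened for readability):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

The cross-namespace pings further down (10.0.0.2 from the root namespace, 10.0.0.1 from inside it) are the sanity check that this wiring works before nvmf_tgt is launched via ip netns exec cvl_0_0_ns_spdk.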
00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:06:11.173 13:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
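The array bookkeeping in this stretch is NIC auto-detection: common.sh keeps per-family lists of supported PCI device IDs (Intel e810/x722, Mellanox) and resolves each detected device to its Linux netdev through sysfs. Roughly, as a paraphrased sketch of what the trace shows rather than the literal common.sh source:

  # pci_bus_cache maps "vendor:device" -> PCI addresses (populated earlier from the bus scan)
  e810+=( ${pci_bus_cache["$intel:0x1592"]} )
  e810+=( ${pci_bus_cache["$intel:0x159b"]} )
  pci_devs=( "${e810[@]}" )                     # on the tcp/e810 path only the E810 list is kept
  for pci in "${pci_devs[@]}"; do               # e.g. 0000:0a:00.0 and 0000:0a:00.1
      pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
      net_devs+=( "${pci_net_devs[@]##*/}" )    # strip the sysfs prefix, leaving cvl_0_0, cvl_0_1
  done

which is why the trace reports "Found 0000:0a:00.0 (0x8086 - 0x159b)" for each function of the NIC and then the cvl_0_0/cvl_0_1 netdevs found under them.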
00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:13.079 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:13.079 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:13.079 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:13.079 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:13.079 13:35:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:13.079 13:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:13.079 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:13.080 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:13.080 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:13.080 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:13.080 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:13.080 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:13.080 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:13.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:13.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:06:13.338 00:06:13.338 --- 10.0.0.2 ping statistics --- 00:06:13.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.338 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:13.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:13.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:06:13.338 00:06:13.338 --- 10.0.0.1 ping statistics --- 00:06:13.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.338 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=469381 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 469381 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 469381 ']' 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.338 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:13.338 [2024-07-25 13:35:10.207252] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:13.338 [2024-07-25 13:35:10.207333] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:13.338 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.338 [2024-07-25 13:35:10.271827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.596 [2024-07-25 13:35:10.382018] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:13.596 [2024-07-25 13:35:10.382097] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:13.596 [2024-07-25 13:35:10.382113] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:13.596 [2024-07-25 13:35:10.382126] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:13.596 [2024-07-25 13:35:10.382136] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:13.596 [2024-07-25 13:35:10.382192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.596 [2024-07-25 13:35:10.382220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.596 [2024-07-25 13:35:10.382224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.596 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.596 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:06:13.596 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:13.596 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:13.596 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:13.596 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:13.596 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:13.853 [2024-07-25 13:35:10.765128] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.853 13:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:14.110 13:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:14.110 13:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:14.367 13:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:14.367 13:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:14.624 13:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:14.881 13:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3404c260-577c-468f-bc87-d35e30db0749 
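For readability, the target-side provisioning that nvmf_lvol.sh has traced up to this point collapses to a short RPC sequence ($rpc below is shorthand for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py invocation, not a variable defined in the script):

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                        # -> Malloc0 (64 MiB, 512 B blocks)
  $rpc bdev_malloc_create 64 512                        # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  $rpc bdev_lvol_create_lvstore raid0 lvs               # -> lvstore UUID 3404c260-...

The steps that follow in the trace carve a 20 MiB lvol out of that store, export it through subsystem nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, and then run spdk_nvme_perf against it while a snapshot, a resize to 30 MiB, a clone and an inflate are issued against the live volume.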
00:06:14.881 13:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3404c260-577c-468f-bc87-d35e30db0749 lvol 20 00:06:15.138 13:35:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=859114c7-6cfc-4371-b75c-9bd09b69cd6e 00:06:15.138 13:35:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:15.394 13:35:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 859114c7-6cfc-4371-b75c-9bd09b69cd6e 00:06:15.652 13:35:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:15.908 [2024-07-25 13:35:12.822565] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.909 13:35:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:16.166 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=469804 00:06:16.166 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:16.166 13:35:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:16.166 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.102 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 859114c7-6cfc-4371-b75c-9bd09b69cd6e MY_SNAPSHOT 00:06:17.360 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4a69a204-569d-4d89-b7e9-5512f1b67774 00:06:17.360 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 859114c7-6cfc-4371-b75c-9bd09b69cd6e 30 00:06:17.929 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 4a69a204-569d-4d89-b7e9-5512f1b67774 MY_CLONE 00:06:18.189 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=63a7c72b-5758-4a73-9dd3-e80d452aeef8 00:06:18.189 13:35:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 63a7c72b-5758-4a73-9dd3-e80d452aeef8 00:06:18.756 13:35:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 469804 00:06:26.870 Initializing NVMe Controllers 00:06:26.870 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:26.870 Controller IO queue size 128, less than required. 00:06:26.870 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:26.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:26.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:26.870 Initialization complete. Launching workers. 00:06:26.870 ======================================================== 00:06:26.870 Latency(us) 00:06:26.870 Device Information : IOPS MiB/s Average min max 00:06:26.870 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10571.30 41.29 12108.07 296.96 94482.93 00:06:26.870 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10633.60 41.54 12043.85 2013.21 71881.22 00:06:26.870 ======================================================== 00:06:26.870 Total : 21204.90 82.83 12075.86 296.96 94482.93 00:06:26.870 00:06:26.870 13:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:26.870 13:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 859114c7-6cfc-4371-b75c-9bd09b69cd6e 00:06:27.128 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3404c260-577c-468f-bc87-d35e30db0749 00:06:27.387 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:27.387 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:27.387 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:27.387 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:27.387 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:06:27.387 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:27.387 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:06:27.387 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:27.387 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:27.387 rmmod nvme_tcp 00:06:27.387 rmmod nvme_fabrics 00:06:27.387 rmmod nvme_keyring 00:06:27.387 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:27.387 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:06:27.387 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:06:27.387 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 469381 ']' 00:06:27.387 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 469381 00:06:27.387 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 469381 ']' 00:06:27.387 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 469381 00:06:27.387 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:06:27.387 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.387 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 469381 00:06:27.387 13:35:24 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.387 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.387 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 469381' 00:06:27.387 killing process with pid 469381 00:06:27.387 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 469381 00:06:27.387 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 469381 00:06:27.955 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:27.955 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:27.955 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:27.955 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:27.955 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:27.955 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.955 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:27.955 13:35:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:29.860 00:06:29.860 real 0m19.020s 00:06:29.860 user 1m4.218s 00:06:29.860 sys 0m5.641s 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:29.860 ************************************ 00:06:29.860 END TEST nvmf_lvol 00:06:29.860 ************************************ 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:29.860 ************************************ 00:06:29.860 START TEST nvmf_lvs_grow 00:06:29.860 ************************************ 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:29.860 * Looking for test storage... 
00:06:29.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.860 13:35:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:29.860 13:35:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:06:29.860 13:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:32.395 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:32.395 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:06:32.395 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:32.395 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:32.396 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:32.396 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:32.396 
13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:32.396 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:32.396 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:32.396 13:35:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:32.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:32.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:06:32.396 00:06:32.396 --- 10.0.0.2 ping statistics --- 00:06:32.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.396 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:06:32.396 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:32.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:32.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:06:32.396 00:06:32.396 --- 10.0.0.1 ping statistics --- 00:06:32.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.397 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:06:32.397 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:32.397 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:06:32.397 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:32.397 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:32.397 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:32.397 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:32.397 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:32.397 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:32.397 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:32.397 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:32.397 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:32.397 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:32.397 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:32.397 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=473077 00:06:32.397 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:32.397 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 473077 00:06:32.397 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 473077 ']' 00:06:32.397 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.397 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.397 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.397 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.397 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:32.397 [2024-07-25 13:35:29.245542] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:32.397 [2024-07-25 13:35:29.245620] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.397 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.397 [2024-07-25 13:35:29.307261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.397 [2024-07-25 13:35:29.416846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:32.397 [2024-07-25 13:35:29.416919] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:32.397 [2024-07-25 13:35:29.416932] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:32.397 [2024-07-25 13:35:29.416943] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:32.397 [2024-07-25 13:35:29.416952] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:32.397 [2024-07-25 13:35:29.416979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.655 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.655 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:06:32.655 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:32.655 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:32.655 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:32.655 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:32.655 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:32.912 [2024-07-25 13:35:29.794653] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:32.912 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:32.912 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.912 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.912 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:32.912 ************************************ 00:06:32.912 START TEST lvs_grow_clean 00:06:32.912 ************************************ 00:06:32.912 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:06:32.912 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:32.912 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:32.912 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:32.913 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:06:32.913 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:32.913 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:32.913 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:32.913 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:32.913 13:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:33.170 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:33.170 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:33.428 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7be4a44e-1375-40e9-99e8-0a972daf522c 00:06:33.428 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7be4a44e-1375-40e9-99e8-0a972daf522c 00:06:33.428 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:33.686 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:33.686 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:33.686 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7be4a44e-1375-40e9-99e8-0a972daf522c lvol 150 00:06:33.945 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=32b2c63d-39eb-450a-ab88-f612b5655971 00:06:33.945 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:33.945 13:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:34.221 [2024-07-25 13:35:31.102261] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:34.221 [2024-07-25 13:35:31.102372] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:34.221 true 00:06:34.221 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7be4a44e-1375-40e9-99e8-0a972daf522c 00:06:34.221 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:34.491 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:34.491 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:34.750 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 32b2c63d-39eb-450a-ab88-f612b5655971 00:06:35.008 13:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:35.267 [2024-07-25 13:35:32.097407] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:35.267 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:35.525 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=473520 00:06:35.525 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:35.525 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:35.525 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 473520 /var/tmp/bdevperf.sock 00:06:35.525 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 473520 ']' 00:06:35.525 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:35.525 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.525 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:35.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:35.526 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.526 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:35.526 [2024-07-25 13:35:32.395934] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
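The provisioning just completed is the core of the clean-grow case: a 200M file is truncated into existence and registered as an AIO bdev with a 4 KiB block size, then an lvstore is created on it with 4 MiB clusters (--cluster-sz 4194304). 200 MiB / 4 MiB is 50 clusters, and with blobstore metadata taking its share that leaves the 49 usable data clusters the script asserts. The 150 MiB lvol then consumes ceil(150/4) = 38 clusters, which the bdev JSON later confirms as "num_allocated_clusters": 38. Growing the file to 400M and rescanning doubles the bdev (51200 to 102400 blocks), but the lvstore only claims the new space once bdev_lvol_grow_lvstore is called mid-run, taking total_data_clusters from 49 to 99. A condensed sketch of the RPC sequence, where aio_file stands for the full /var/jenkins/... path used above and $lvs holds the UUID the create call prints (7be4a44e-... in this run):

    truncate -s 200M aio_file
    rpc.py bdev_aio_create aio_file aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    rpc.py bdev_lvol_create -u "$lvs" lvol 150       # 38 x 4 MiB clusters
    truncate -s 400M aio_file                        # grow the backing file
    rpc.py bdev_aio_rescan aio_bdev                  # bdev now 102400 blocks
    # later, while bdevperf is writing:
    rpc.py bdev_lvol_grow_lvstore -u "$lvs"          # 49 -> 99 data clusters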
00:06:35.526 [2024-07-25 13:35:32.396005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473520 ] 00:06:35.526 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.526 [2024-07-25 13:35:32.452509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.526 [2024-07-25 13:35:32.558155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.783 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.783 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:06:35.783 13:35:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:36.041 Nvme0n1 00:06:36.041 13:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:36.298 [ 00:06:36.298 { 00:06:36.299 "name": "Nvme0n1", 00:06:36.299 "aliases": [ 00:06:36.299 "32b2c63d-39eb-450a-ab88-f612b5655971" 00:06:36.299 ], 00:06:36.299 "product_name": "NVMe disk", 00:06:36.299 "block_size": 4096, 00:06:36.299 "num_blocks": 38912, 00:06:36.299 "uuid": "32b2c63d-39eb-450a-ab88-f612b5655971", 00:06:36.299 "assigned_rate_limits": { 00:06:36.299 "rw_ios_per_sec": 0, 00:06:36.299 "rw_mbytes_per_sec": 0, 00:06:36.299 "r_mbytes_per_sec": 0, 00:06:36.299 "w_mbytes_per_sec": 0 00:06:36.299 }, 00:06:36.299 "claimed": false, 00:06:36.299 "zoned": false, 00:06:36.299 "supported_io_types": { 00:06:36.299 "read": true, 00:06:36.299 "write": true, 00:06:36.299 "unmap": true, 00:06:36.299 "flush": true, 00:06:36.299 "reset": true, 00:06:36.299 "nvme_admin": true, 00:06:36.299 "nvme_io": true, 00:06:36.299 "nvme_io_md": false, 00:06:36.299 "write_zeroes": true, 00:06:36.299 "zcopy": false, 00:06:36.299 "get_zone_info": false, 00:06:36.299 "zone_management": false, 00:06:36.299 "zone_append": false, 00:06:36.299 "compare": true, 00:06:36.299 "compare_and_write": true, 00:06:36.299 "abort": true, 00:06:36.299 "seek_hole": false, 00:06:36.299 "seek_data": false, 00:06:36.299 "copy": true, 00:06:36.299 "nvme_iov_md": false 00:06:36.299 }, 00:06:36.299 "memory_domains": [ 00:06:36.299 { 00:06:36.299 "dma_device_id": "system", 00:06:36.299 "dma_device_type": 1 00:06:36.299 } 00:06:36.299 ], 00:06:36.299 "driver_specific": { 00:06:36.299 "nvme": [ 00:06:36.299 { 00:06:36.299 "trid": { 00:06:36.299 "trtype": "TCP", 00:06:36.299 "adrfam": "IPv4", 00:06:36.299 "traddr": "10.0.0.2", 00:06:36.299 "trsvcid": "4420", 00:06:36.299 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:36.299 }, 00:06:36.299 "ctrlr_data": { 00:06:36.299 "cntlid": 1, 00:06:36.299 "vendor_id": "0x8086", 00:06:36.299 "model_number": "SPDK bdev Controller", 00:06:36.299 "serial_number": "SPDK0", 00:06:36.299 "firmware_revision": "24.09", 00:06:36.299 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:36.299 "oacs": { 00:06:36.299 "security": 0, 00:06:36.299 "format": 0, 00:06:36.299 "firmware": 0, 00:06:36.299 "ns_manage": 0 00:06:36.299 }, 00:06:36.299 
"multi_ctrlr": true, 00:06:36.299 "ana_reporting": false 00:06:36.299 }, 00:06:36.299 "vs": { 00:06:36.299 "nvme_version": "1.3" 00:06:36.299 }, 00:06:36.299 "ns_data": { 00:06:36.299 "id": 1, 00:06:36.299 "can_share": true 00:06:36.299 } 00:06:36.299 } 00:06:36.299 ], 00:06:36.299 "mp_policy": "active_passive" 00:06:36.299 } 00:06:36.299 } 00:06:36.299 ] 00:06:36.299 13:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=473650 00:06:36.299 13:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:36.299 13:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:36.557 Running I/O for 10 seconds... 00:06:37.492 Latency(us) 00:06:37.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:37.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:37.492 Nvme0n1 : 1.00 15622.00 61.02 0.00 0.00 0.00 0.00 0.00 00:06:37.492 =================================================================================================================== 00:06:37.492 Total : 15622.00 61.02 0.00 0.00 0.00 0.00 0.00 00:06:37.492 00:06:38.430 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7be4a44e-1375-40e9-99e8-0a972daf522c 00:06:38.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:38.430 Nvme0n1 : 2.00 15748.50 61.52 0.00 0.00 0.00 0.00 0.00 00:06:38.430 =================================================================================================================== 00:06:38.430 Total : 15748.50 61.52 0.00 0.00 0.00 0.00 0.00 00:06:38.430 00:06:38.687 true 00:06:38.687 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7be4a44e-1375-40e9-99e8-0a972daf522c 00:06:38.687 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:38.944 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:38.944 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:38.944 13:35:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 473650 00:06:39.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:39.513 Nvme0n1 : 3.00 15844.33 61.89 0.00 0.00 0.00 0.00 0.00 00:06:39.513 =================================================================================================================== 00:06:39.513 Total : 15844.33 61.89 0.00 0.00 0.00 0.00 0.00 00:06:39.513 00:06:40.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:40.451 Nvme0n1 : 4.00 15947.25 62.29 0.00 0.00 0.00 0.00 0.00 00:06:40.451 =================================================================================================================== 00:06:40.451 Total : 15947.25 62.29 0.00 0.00 0.00 0.00 0.00 00:06:40.451 00:06:41.391 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:06:41.391 Nvme0n1 : 5.00 16009.00 62.54 0.00 0.00 0.00 0.00 0.00 00:06:41.391 =================================================================================================================== 00:06:41.391 Total : 16009.00 62.54 0.00 0.00 0.00 0.00 0.00 00:06:41.391 00:06:42.768 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:42.768 Nvme0n1 : 6.00 16071.33 62.78 0.00 0.00 0.00 0.00 0.00 00:06:42.768 =================================================================================================================== 00:06:42.768 Total : 16071.33 62.78 0.00 0.00 0.00 0.00 0.00 00:06:42.768 00:06:43.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:43.706 Nvme0n1 : 7.00 16107.00 62.92 0.00 0.00 0.00 0.00 0.00 00:06:43.706 =================================================================================================================== 00:06:43.706 Total : 16107.00 62.92 0.00 0.00 0.00 0.00 0.00 00:06:43.706 00:06:44.643 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:44.643 Nvme0n1 : 8.00 16133.75 63.02 0.00 0.00 0.00 0.00 0.00 00:06:44.643 =================================================================================================================== 00:06:44.643 Total : 16133.75 63.02 0.00 0.00 0.00 0.00 0.00 00:06:44.643 00:06:45.582 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:45.582 Nvme0n1 : 9.00 16158.33 63.12 0.00 0.00 0.00 0.00 0.00 00:06:45.582 =================================================================================================================== 00:06:45.582 Total : 16158.33 63.12 0.00 0.00 0.00 0.00 0.00 00:06:45.582 00:06:46.520 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:46.520 Nvme0n1 : 10.00 16180.80 63.21 0.00 0.00 0.00 0.00 0.00 00:06:46.521 =================================================================================================================== 00:06:46.521 Total : 16180.80 63.21 0.00 0.00 0.00 0.00 0.00 00:06:46.521 00:06:46.521 00:06:46.521 Latency(us) 00:06:46.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:46.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:46.521 Nvme0n1 : 10.01 16183.76 63.22 0.00 0.00 7904.74 2888.44 14757.74 00:06:46.521 =================================================================================================================== 00:06:46.521 Total : 16183.76 63.22 0.00 0.00 7904.74 2888.44 14757.74 00:06:46.521 0 00:06:46.521 13:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 473520 00:06:46.521 13:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 473520 ']' 00:06:46.521 13:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 473520 00:06:46.521 13:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:06:46.521 13:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:46.521 13:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 473520 00:06:46.521 13:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:46.521 13:35:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:46.521 13:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 473520' 00:06:46.521 killing process with pid 473520 00:06:46.521 13:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 473520 00:06:46.521 Received shutdown signal, test time was about 10.000000 seconds 00:06:46.521 00:06:46.521 Latency(us) 00:06:46.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:46.521 =================================================================================================================== 00:06:46.521 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:46.521 13:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 473520 00:06:46.781 13:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:47.039 13:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:47.298 13:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7be4a44e-1375-40e9-99e8-0a972daf522c 00:06:47.298 13:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:47.558 13:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:06:47.558 13:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:06:47.558 13:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:47.818 [2024-07-25 13:35:44.731785] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:06:47.818 13:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7be4a44e-1375-40e9-99e8-0a972daf522c 00:06:47.818 13:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:06:47.818 13:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7be4a44e-1375-40e9-99e8-0a972daf522c 00:06:47.818 13:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:47.818 13:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.818 13:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:47.818 13:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.818 13:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:47.818 13:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.818 13:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:47.818 13:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:47.818 13:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7be4a44e-1375-40e9-99e8-0a972daf522c 00:06:48.077 request: 00:06:48.077 { 00:06:48.077 "uuid": "7be4a44e-1375-40e9-99e8-0a972daf522c", 00:06:48.077 "method": "bdev_lvol_get_lvstores", 00:06:48.077 "req_id": 1 00:06:48.077 } 00:06:48.077 Got JSON-RPC error response 00:06:48.077 response: 00:06:48.077 { 00:06:48.077 "code": -19, 00:06:48.077 "message": "No such device" 00:06:48.077 } 00:06:48.077 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:06:48.077 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.077 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:48.077 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.077 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:48.334 aio_bdev 00:06:48.334 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 32b2c63d-39eb-450a-ab88-f612b5655971 00:06:48.334 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=32b2c63d-39eb-450a-ab88-f612b5655971 00:06:48.334 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:48.334 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:06:48.334 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:48.334 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:48.334 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:48.592 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b 32b2c63d-39eb-450a-ab88-f612b5655971 -t 2000 00:06:48.851 [ 00:06:48.851 { 00:06:48.851 "name": "32b2c63d-39eb-450a-ab88-f612b5655971", 00:06:48.851 "aliases": [ 00:06:48.851 "lvs/lvol" 00:06:48.851 ], 00:06:48.851 "product_name": "Logical Volume", 00:06:48.851 "block_size": 4096, 00:06:48.851 "num_blocks": 38912, 00:06:48.851 "uuid": "32b2c63d-39eb-450a-ab88-f612b5655971", 00:06:48.851 "assigned_rate_limits": { 00:06:48.851 "rw_ios_per_sec": 0, 00:06:48.851 "rw_mbytes_per_sec": 0, 00:06:48.851 "r_mbytes_per_sec": 0, 00:06:48.851 "w_mbytes_per_sec": 0 00:06:48.851 }, 00:06:48.851 "claimed": false, 00:06:48.851 "zoned": false, 00:06:48.851 "supported_io_types": { 00:06:48.851 "read": true, 00:06:48.851 "write": true, 00:06:48.851 "unmap": true, 00:06:48.851 "flush": false, 00:06:48.851 "reset": true, 00:06:48.851 "nvme_admin": false, 00:06:48.851 "nvme_io": false, 00:06:48.851 "nvme_io_md": false, 00:06:48.851 "write_zeroes": true, 00:06:48.851 "zcopy": false, 00:06:48.851 "get_zone_info": false, 00:06:48.851 "zone_management": false, 00:06:48.851 "zone_append": false, 00:06:48.851 "compare": false, 00:06:48.851 "compare_and_write": false, 00:06:48.851 "abort": false, 00:06:48.851 "seek_hole": true, 00:06:48.851 "seek_data": true, 00:06:48.851 "copy": false, 00:06:48.851 "nvme_iov_md": false 00:06:48.851 }, 00:06:48.851 "driver_specific": { 00:06:48.851 "lvol": { 00:06:48.851 "lvol_store_uuid": "7be4a44e-1375-40e9-99e8-0a972daf522c", 00:06:48.851 "base_bdev": "aio_bdev", 00:06:48.851 "thin_provision": false, 00:06:48.851 "num_allocated_clusters": 38, 00:06:48.851 "snapshot": false, 00:06:48.851 "clone": false, 00:06:48.851 "esnap_clone": false 00:06:48.851 } 00:06:48.851 } 00:06:48.851 } 00:06:48.851 ] 00:06:48.851 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:06:48.852 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7be4a44e-1375-40e9-99e8-0a972daf522c 00:06:48.852 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:06:49.111 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:06:49.111 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7be4a44e-1375-40e9-99e8-0a972daf522c 00:06:49.111 13:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:06:49.370 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:06:49.370 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 32b2c63d-39eb-450a-ab88-f612b5655971 00:06:49.629 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7be4a44e-1375-40e9-99e8-0a972daf522c 00:06:49.888 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:50.147 13:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:50.147 00:06:50.147 real 0m17.171s 00:06:50.147 user 0m16.432s 00:06:50.147 sys 0m1.988s 00:06:50.147 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.147 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:50.147 ************************************ 00:06:50.147 END TEST lvs_grow_clean 00:06:50.147 ************************************ 00:06:50.147 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:06:50.147 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:50.147 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.147 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:50.147 ************************************ 00:06:50.147 START TEST lvs_grow_dirty 00:06:50.147 ************************************ 00:06:50.147 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:06:50.147 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:50.147 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:50.147 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:50.147 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:50.147 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:50.147 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:50.147 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:50.147 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:50.147 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:50.405 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:50.405 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:50.664 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=a67cb357-a37d-4414-a127-dc1bdb2e4448 00:06:50.664 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a67cb357-a37d-4414-a127-dc1bdb2e4448 00:06:50.664 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:50.922 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:50.922 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:50.922 13:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a67cb357-a37d-4414-a127-dc1bdb2e4448 lvol 150 00:06:51.181 13:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c2e078a8-9671-4400-bb59-90547224da26 00:06:51.181 13:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:51.181 13:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:51.441 [2024-07-25 13:35:48.406282] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:51.441 [2024-07-25 13:35:48.406396] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:51.441 true 00:06:51.441 13:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a67cb357-a37d-4414-a127-dc1bdb2e4448 00:06:51.441 13:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:51.701 13:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:51.701 13:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:51.959 13:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c2e078a8-9671-4400-bb59-90547224da26 00:06:52.218 13:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:52.478 [2024-07-25 13:35:49.381284] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:52.478 13:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
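The lvs_grow_dirty variant provisions exactly like the clean one (fresh lvstore a67cb357-..., a 150 MiB lvol c2e078a8-..., backing file grown to 400M and rescanned); what differs is the teardown, where the target is later killed with the lvstore metadata still dirty so the reload path has to recover. During the 10-second randwrite run, the script sleeps two seconds and then grows the lvstore under load, asserting the cluster count doubles. A minimal sketch of that grow-under-I/O check, with $lvs holding the lvstore UUID as above and bdevperf assumed to be already writing to the exported lvol:

    sleep 2
    rpc.py bdev_lvol_grow_lvstore -u "$lvs"
    clusters=$(rpc.py bdev_lvol_get_lvstores -u "$lvs" \
               | jq -r '.[0].total_data_clusters')
    (( clusters == 99 ))    # was 49 before the grow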
00:06:52.736 13:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=475578 00:06:52.736 13:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:52.736 13:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:52.736 13:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 475578 /var/tmp/bdevperf.sock 00:06:52.736 13:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 475578 ']' 00:06:52.736 13:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:52.736 13:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.736 13:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:52.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:52.736 13:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.736 13:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:52.736 [2024-07-25 13:35:49.679230] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
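On the initiator side, bdevperf is started on core 1 (-m 0x2) with its own RPC socket (-r /var/tmp/bdevperf.sock) in wait mode (-z), a controller is attached over the fabric, and the configured job (-q 128 -w randwrite -t 10 -S 1) is kicked off through bdevperf.py. A sketch of that handshake, using the same commands the script issues:

    # attach the exported namespace; it shows up as bdev Nvme0n1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0
    # sanity check that the bdev appeared (3 s timeout)
    rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
    # run the configured randwrite workload
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests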
00:06:52.736 [2024-07-25 13:35:49.679304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475578 ] 00:06:52.736 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.736 [2024-07-25 13:35:49.736487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.994 [2024-07-25 13:35:49.842527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.994 13:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.994 13:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:06:52.994 13:35:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:53.564 Nvme0n1 00:06:53.564 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:53.822 [ 00:06:53.822 { 00:06:53.822 "name": "Nvme0n1", 00:06:53.822 "aliases": [ 00:06:53.822 "c2e078a8-9671-4400-bb59-90547224da26" 00:06:53.822 ], 00:06:53.822 "product_name": "NVMe disk", 00:06:53.822 "block_size": 4096, 00:06:53.822 "num_blocks": 38912, 00:06:53.822 "uuid": "c2e078a8-9671-4400-bb59-90547224da26", 00:06:53.822 "assigned_rate_limits": { 00:06:53.822 "rw_ios_per_sec": 0, 00:06:53.822 "rw_mbytes_per_sec": 0, 00:06:53.822 "r_mbytes_per_sec": 0, 00:06:53.822 "w_mbytes_per_sec": 0 00:06:53.822 }, 00:06:53.822 "claimed": false, 00:06:53.822 "zoned": false, 00:06:53.822 "supported_io_types": { 00:06:53.822 "read": true, 00:06:53.822 "write": true, 00:06:53.822 "unmap": true, 00:06:53.822 "flush": true, 00:06:53.822 "reset": true, 00:06:53.822 "nvme_admin": true, 00:06:53.822 "nvme_io": true, 00:06:53.822 "nvme_io_md": false, 00:06:53.822 "write_zeroes": true, 00:06:53.822 "zcopy": false, 00:06:53.822 "get_zone_info": false, 00:06:53.822 "zone_management": false, 00:06:53.822 "zone_append": false, 00:06:53.822 "compare": true, 00:06:53.822 "compare_and_write": true, 00:06:53.822 "abort": true, 00:06:53.822 "seek_hole": false, 00:06:53.822 "seek_data": false, 00:06:53.822 "copy": true, 00:06:53.822 "nvme_iov_md": false 00:06:53.822 }, 00:06:53.822 "memory_domains": [ 00:06:53.822 { 00:06:53.822 "dma_device_id": "system", 00:06:53.822 "dma_device_type": 1 00:06:53.822 } 00:06:53.822 ], 00:06:53.822 "driver_specific": { 00:06:53.822 "nvme": [ 00:06:53.822 { 00:06:53.822 "trid": { 00:06:53.822 "trtype": "TCP", 00:06:53.822 "adrfam": "IPv4", 00:06:53.822 "traddr": "10.0.0.2", 00:06:53.822 "trsvcid": "4420", 00:06:53.822 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:53.822 }, 00:06:53.822 "ctrlr_data": { 00:06:53.822 "cntlid": 1, 00:06:53.822 "vendor_id": "0x8086", 00:06:53.822 "model_number": "SPDK bdev Controller", 00:06:53.822 "serial_number": "SPDK0", 00:06:53.822 "firmware_revision": "24.09", 00:06:53.822 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:53.822 "oacs": { 00:06:53.822 "security": 0, 00:06:53.822 "format": 0, 00:06:53.822 "firmware": 0, 00:06:53.822 "ns_manage": 0 00:06:53.822 }, 00:06:53.822 
"multi_ctrlr": true, 00:06:53.822 "ana_reporting": false 00:06:53.822 }, 00:06:53.822 "vs": { 00:06:53.822 "nvme_version": "1.3" 00:06:53.822 }, 00:06:53.822 "ns_data": { 00:06:53.822 "id": 1, 00:06:53.822 "can_share": true 00:06:53.822 } 00:06:53.822 } 00:06:53.822 ], 00:06:53.822 "mp_policy": "active_passive" 00:06:53.822 } 00:06:53.822 } 00:06:53.822 ] 00:06:53.822 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=475716 00:06:53.822 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:53.822 13:35:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:53.822 Running I/O for 10 seconds... 00:06:54.761 Latency(us) 00:06:54.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:54.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:54.761 Nvme0n1 : 1.00 16260.00 63.52 0.00 0.00 0.00 0.00 0.00 00:06:54.761 =================================================================================================================== 00:06:54.761 Total : 16260.00 63.52 0.00 0.00 0.00 0.00 0.00 00:06:54.761 00:06:55.694 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a67cb357-a37d-4414-a127-dc1bdb2e4448 00:06:55.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:55.952 Nvme0n1 : 2.00 16388.00 64.02 0.00 0.00 0.00 0.00 0.00 00:06:55.952 =================================================================================================================== 00:06:55.952 Total : 16388.00 64.02 0.00 0.00 0.00 0.00 0.00 00:06:55.952 00:06:55.952 true 00:06:55.952 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a67cb357-a37d-4414-a127-dc1bdb2e4448 00:06:55.952 13:35:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:56.210 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:56.210 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:56.210 13:35:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 475716 00:06:56.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:56.776 Nvme0n1 : 3.00 16472.00 64.34 0.00 0.00 0.00 0.00 0.00 00:06:56.776 =================================================================================================================== 00:06:56.776 Total : 16472.00 64.34 0.00 0.00 0.00 0.00 0.00 00:06:56.776 00:06:57.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:57.749 Nvme0n1 : 4.00 16551.50 64.65 0.00 0.00 0.00 0.00 0.00 00:06:57.749 =================================================================================================================== 00:06:57.749 Total : 16551.50 64.65 0.00 0.00 0.00 0.00 0.00 00:06:57.749 00:06:59.126 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:06:59.127 Nvme0n1 : 5.00 16607.40 64.87 0.00 0.00 0.00 0.00 0.00 00:06:59.127 =================================================================================================================== 00:06:59.127 Total : 16607.40 64.87 0.00 0.00 0.00 0.00 0.00 00:06:59.127 00:07:00.063 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:00.063 Nvme0n1 : 6.00 16644.17 65.02 0.00 0.00 0.00 0.00 0.00 00:07:00.064 =================================================================================================================== 00:07:00.064 Total : 16644.17 65.02 0.00 0.00 0.00 0.00 0.00 00:07:00.064 00:07:01.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.001 Nvme0n1 : 7.00 16680.57 65.16 0.00 0.00 0.00 0.00 0.00 00:07:01.001 =================================================================================================================== 00:07:01.001 Total : 16680.57 65.16 0.00 0.00 0.00 0.00 0.00 00:07:01.001 00:07:01.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.938 Nvme0n1 : 8.00 16723.50 65.33 0.00 0.00 0.00 0.00 0.00 00:07:01.938 =================================================================================================================== 00:07:01.938 Total : 16723.50 65.33 0.00 0.00 0.00 0.00 0.00 00:07:01.938 00:07:02.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.875 Nvme0n1 : 9.00 16753.33 65.44 0.00 0.00 0.00 0.00 0.00 00:07:02.875 =================================================================================================================== 00:07:02.875 Total : 16753.33 65.44 0.00 0.00 0.00 0.00 0.00 00:07:02.875 00:07:03.810 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.810 Nvme0n1 : 10.00 16768.00 65.50 0.00 0.00 0.00 0.00 0.00 00:07:03.810 =================================================================================================================== 00:07:03.810 Total : 16768.00 65.50 0.00 0.00 0.00 0.00 0.00 00:07:03.810 00:07:03.810 00:07:03.810 Latency(us) 00:07:03.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:03.810 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.810 Nvme0n1 : 10.01 16768.85 65.50 0.00 0.00 7628.58 2196.67 14854.83 00:07:03.810 =================================================================================================================== 00:07:03.810 Total : 16768.85 65.50 0.00 0.00 7628.58 2196.67 14854.83 00:07:03.810 0 00:07:03.810 13:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 475578 00:07:03.810 13:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 475578 ']' 00:07:03.810 13:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 475578 00:07:03.810 13:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:07:03.810 13:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.810 13:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 475578 00:07:03.810 13:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:03.810 13:36:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:03.810 13:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 475578' 00:07:03.810 killing process with pid 475578 00:07:03.810 13:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 475578 00:07:03.810 Received shutdown signal, test time was about 10.000000 seconds 00:07:03.810 00:07:03.810 Latency(us) 00:07:03.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:03.810 =================================================================================================================== 00:07:03.810 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:03.810 13:36:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 475578 00:07:04.068 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:04.325 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:04.891 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a67cb357-a37d-4414-a127-dc1bdb2e4448 00:07:04.891 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:04.891 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:04.891 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:04.891 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 473077 00:07:04.891 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 473077 00:07:04.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 473077 Killed "${NVMF_APP[@]}" "$@" 00:07:04.891 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:04.891 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:04.891 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:05.151 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:05.151 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:05.151 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=477168 00:07:05.151 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:05.151 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # 
waitforlisten 477168 00:07:05.151 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 477168 ']' 00:07:05.151 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.151 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.151 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.151 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.151 13:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:05.151 [2024-07-25 13:36:01.978513] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:05.151 [2024-07-25 13:36:01.978590] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.151 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.151 [2024-07-25 13:36:02.043367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.151 [2024-07-25 13:36:02.145778] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:05.151 [2024-07-25 13:36:02.145845] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:05.151 [2024-07-25 13:36:02.145860] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:05.151 [2024-07-25 13:36:02.145870] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:05.151 [2024-07-25 13:36:02.145880] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
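This is the dirty-path restart: the first target (pid 473077) was killed with SIGKILL while the grown lvstore still had dirty metadata, and a fresh nvmf_tgt (pid 477168) is brought up in the same namespace. When bdev_aio_create re-registers the backing file, the blobstore detects the unclean shutdown and replays recovery (the "Performing recovery on blobstore" / "Recover: blob 0x0"/"0x1" notices below), after which the test asserts nothing was lost: 99 total data clusters with 38 still allocated to the lvol, i.e. 99 - 38 = 61 free. A sketch of the post-recovery assertions, with aio_file and $lvs as in the earlier sketches:

    rpc.py bdev_aio_create aio_file aio_bdev 4096    # triggers bs recovery
    free=$(rpc.py bdev_lvol_get_lvstores -u "$lvs" \
           | jq -r '.[0].free_clusters')
    (( free == 61 ))    # 99 total - 38 allocated by the 150 MiB lvol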
00:07:05.151 [2024-07-25 13:36:02.145904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.410 13:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.410 13:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:05.410 13:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:05.410 13:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:05.410 13:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:05.410 13:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:05.410 13:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:05.668 [2024-07-25 13:36:02.543490] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:05.668 [2024-07-25 13:36:02.543629] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:05.668 [2024-07-25 13:36:02.543675] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:05.668 13:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:05.668 13:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c2e078a8-9671-4400-bb59-90547224da26 00:07:05.668 13:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=c2e078a8-9671-4400-bb59-90547224da26 00:07:05.668 13:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:05.668 13:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:05.668 13:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:05.668 13:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:05.668 13:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:05.926 13:36:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c2e078a8-9671-4400-bb59-90547224da26 -t 2000 00:07:06.185 [ 00:07:06.185 { 00:07:06.185 "name": "c2e078a8-9671-4400-bb59-90547224da26", 00:07:06.185 "aliases": [ 00:07:06.185 "lvs/lvol" 00:07:06.185 ], 00:07:06.185 "product_name": "Logical Volume", 00:07:06.185 "block_size": 4096, 00:07:06.185 "num_blocks": 38912, 00:07:06.185 "uuid": "c2e078a8-9671-4400-bb59-90547224da26", 00:07:06.185 "assigned_rate_limits": { 00:07:06.185 "rw_ios_per_sec": 0, 00:07:06.185 "rw_mbytes_per_sec": 0, 00:07:06.185 "r_mbytes_per_sec": 0, 00:07:06.185 "w_mbytes_per_sec": 0 00:07:06.185 }, 00:07:06.185 "claimed": false, 00:07:06.185 "zoned": false, 
00:07:06.185 "supported_io_types": { 00:07:06.185 "read": true, 00:07:06.185 "write": true, 00:07:06.185 "unmap": true, 00:07:06.185 "flush": false, 00:07:06.185 "reset": true, 00:07:06.185 "nvme_admin": false, 00:07:06.185 "nvme_io": false, 00:07:06.185 "nvme_io_md": false, 00:07:06.185 "write_zeroes": true, 00:07:06.185 "zcopy": false, 00:07:06.185 "get_zone_info": false, 00:07:06.185 "zone_management": false, 00:07:06.185 "zone_append": false, 00:07:06.185 "compare": false, 00:07:06.185 "compare_and_write": false, 00:07:06.185 "abort": false, 00:07:06.185 "seek_hole": true, 00:07:06.185 "seek_data": true, 00:07:06.185 "copy": false, 00:07:06.185 "nvme_iov_md": false 00:07:06.185 }, 00:07:06.185 "driver_specific": { 00:07:06.185 "lvol": { 00:07:06.185 "lvol_store_uuid": "a67cb357-a37d-4414-a127-dc1bdb2e4448", 00:07:06.185 "base_bdev": "aio_bdev", 00:07:06.185 "thin_provision": false, 00:07:06.185 "num_allocated_clusters": 38, 00:07:06.185 "snapshot": false, 00:07:06.185 "clone": false, 00:07:06.185 "esnap_clone": false 00:07:06.185 } 00:07:06.185 } 00:07:06.185 } 00:07:06.185 ] 00:07:06.185 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:06.185 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a67cb357-a37d-4414-a127-dc1bdb2e4448 00:07:06.185 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:06.444 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:06.444 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a67cb357-a37d-4414-a127-dc1bdb2e4448 00:07:06.444 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:06.701 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:06.701 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:06.959 [2024-07-25 13:36:03.788492] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:06.959 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a67cb357-a37d-4414-a127-dc1bdb2e4448 00:07:06.959 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:06.959 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a67cb357-a37d-4414-a127-dc1bdb2e4448 00:07:06.959 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:06.959 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:07:06.959 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:06.959 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.959 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:06.959 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.959 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:06.959 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:06.959 13:36:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a67cb357-a37d-4414-a127-dc1bdb2e4448 00:07:07.217 request: 00:07:07.217 { 00:07:07.217 "uuid": "a67cb357-a37d-4414-a127-dc1bdb2e4448", 00:07:07.217 "method": "bdev_lvol_get_lvstores", 00:07:07.217 "req_id": 1 00:07:07.217 } 00:07:07.217 Got JSON-RPC error response 00:07:07.217 response: 00:07:07.217 { 00:07:07.217 "code": -19, 00:07:07.217 "message": "No such device" 00:07:07.217 } 00:07:07.217 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:07.217 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.217 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:07.217 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.217 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:07.475 aio_bdev 00:07:07.475 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c2e078a8-9671-4400-bb59-90547224da26 00:07:07.475 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=c2e078a8-9671-4400-bb59-90547224da26 00:07:07.475 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:07.475 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:07.475 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:07.475 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:07.475 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:07.734 13:36:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c2e078a8-9671-4400-bb59-90547224da26 -t 2000 00:07:07.992 [ 00:07:07.992 { 00:07:07.992 "name": "c2e078a8-9671-4400-bb59-90547224da26", 00:07:07.992 "aliases": [ 00:07:07.992 "lvs/lvol" 00:07:07.992 ], 00:07:07.992 "product_name": "Logical Volume", 00:07:07.992 "block_size": 4096, 00:07:07.992 "num_blocks": 38912, 00:07:07.992 "uuid": "c2e078a8-9671-4400-bb59-90547224da26", 00:07:07.992 "assigned_rate_limits": { 00:07:07.992 "rw_ios_per_sec": 0, 00:07:07.992 "rw_mbytes_per_sec": 0, 00:07:07.992 "r_mbytes_per_sec": 0, 00:07:07.992 "w_mbytes_per_sec": 0 00:07:07.992 }, 00:07:07.992 "claimed": false, 00:07:07.992 "zoned": false, 00:07:07.992 "supported_io_types": { 00:07:07.992 "read": true, 00:07:07.992 "write": true, 00:07:07.992 "unmap": true, 00:07:07.992 "flush": false, 00:07:07.992 "reset": true, 00:07:07.992 "nvme_admin": false, 00:07:07.992 "nvme_io": false, 00:07:07.992 "nvme_io_md": false, 00:07:07.992 "write_zeroes": true, 00:07:07.992 "zcopy": false, 00:07:07.992 "get_zone_info": false, 00:07:07.992 "zone_management": false, 00:07:07.992 "zone_append": false, 00:07:07.992 "compare": false, 00:07:07.992 "compare_and_write": false, 00:07:07.992 "abort": false, 00:07:07.992 "seek_hole": true, 00:07:07.992 "seek_data": true, 00:07:07.992 "copy": false, 00:07:07.992 "nvme_iov_md": false 00:07:07.992 }, 00:07:07.992 "driver_specific": { 00:07:07.992 "lvol": { 00:07:07.992 "lvol_store_uuid": "a67cb357-a37d-4414-a127-dc1bdb2e4448", 00:07:07.992 "base_bdev": "aio_bdev", 00:07:07.992 "thin_provision": false, 00:07:07.992 "num_allocated_clusters": 38, 00:07:07.992 "snapshot": false, 00:07:07.992 "clone": false, 00:07:07.992 "esnap_clone": false 00:07:07.992 } 00:07:07.992 } 00:07:07.992 } 00:07:07.992 ] 00:07:07.992 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:07.992 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a67cb357-a37d-4414-a127-dc1bdb2e4448 00:07:07.992 13:36:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:08.251 13:36:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:08.251 13:36:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a67cb357-a37d-4414-a127-dc1bdb2e4448 00:07:08.251 13:36:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:08.510 13:36:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:08.510 13:36:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c2e078a8-9671-4400-bb59-90547224da26 00:07:08.770 13:36:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a67cb357-a37d-4414-a127-dc1bdb2e4448 
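After the aio bdev is re-created and the blobstore recovery traced earlier replays the dirty metadata, the test verifies the store's geometry by querying it over JSON-RPC and filtering with jq. A condensed sketch of that check, with rpc.py short for the full scripts/rpc.py path and the UUID taken from this run:

    uuid=a67cb357-a37d-4414-a127-dc1bdb2e4448
    free_clusters=$(rpc.py bdev_lvol_get_lvstores -u "$uuid" | jq -r '.[0].free_clusters')
    data_clusters=$(rpc.py bdev_lvol_get_lvstores -u "$uuid" | jq -r '.[0].total_data_clusters')
    # The recovered store must report the grown geometry: 99 data clusters,
    # 61 of them still free after the lvol's 38 allocated clusters.
    (( free_clusters == 61 && data_clusters == 99 ))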
00:07:09.028 13:36:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:09.287 00:07:09.287 real 0m19.142s 00:07:09.287 user 0m47.309s 00:07:09.287 sys 0m5.014s 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:09.287 ************************************ 00:07:09.287 END TEST lvs_grow_dirty 00:07:09.287 ************************************ 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:09.287 nvmf_trace.0 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:09.287 rmmod nvme_tcp 00:07:09.287 rmmod nvme_fabrics 00:07:09.287 rmmod nvme_keyring 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 477168 ']' 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 477168 00:07:09.287 
13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 477168 ']' 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 477168 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.287 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 477168 00:07:09.546 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:09.546 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:09.546 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 477168' 00:07:09.546 killing process with pid 477168 00:07:09.546 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 477168 00:07:09.546 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 477168 00:07:09.806 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:09.806 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:09.806 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:09.806 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:09.806 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:09.806 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.806 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.806 13:36:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.708 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:11.708 00:07:11.708 real 0m41.844s 00:07:11.708 user 1m9.535s 00:07:11.708 sys 0m8.959s 00:07:11.708 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.708 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:11.708 ************************************ 00:07:11.708 END TEST nvmf_lvs_grow 00:07:11.708 ************************************ 00:07:11.708 13:36:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:11.708 13:36:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:11.708 13:36:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.708 13:36:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:11.708 ************************************ 00:07:11.708 START TEST nvmf_bdev_io_wait 00:07:11.708 ************************************ 00:07:11.708 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:11.708 * Looking for test storage... 00:07:11.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.966 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:11.967 
13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:07:11.967 13:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:07:13.870 13:36:10 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:13.870 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:13.870 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:13.870 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:13.870 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:13.870 13:36:10 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:13.870 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:13.871 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:13.871 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:13.871 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:14.129 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:14.129 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:14.129 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:14.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:14.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:07:14.129 00:07:14.129 --- 10.0.0.2 ping statistics --- 00:07:14.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.129 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:07:14.129 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:14.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:14.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:07:14.129 00:07:14.129 --- 10.0.0.1 ping statistics --- 00:07:14.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.129 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:07:14.129 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:14.129 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:07:14.129 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:14.129 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:14.129 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:14.129 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:14.129 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:14.129 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:14.129 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:14.129 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:14.129 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:14.129 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:14.129 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:14.129 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=480198 00:07:14.130 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:14.130 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 480198 00:07:14.130 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 480198 ']' 00:07:14.130 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.130 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.130 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.130 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.130 13:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:14.130 [2024-07-25 13:36:11.005508] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
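The nvmf_tcp_init sequence replayed a few entries back reduces to moving the target port into its own namespace, addressing both ends, and probing reachability in both directions. Sketched with the interface names from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # target reachable from the initiator
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and back again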
00:07:14.130 [2024-07-25 13:36:11.005574] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.130 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.130 [2024-07-25 13:36:11.071535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.388 [2024-07-25 13:36:11.180663] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:14.388 [2024-07-25 13:36:11.180708] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:14.388 [2024-07-25 13:36:11.180729] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:14.388 [2024-07-25 13:36:11.180740] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:14.388 [2024-07-25 13:36:11.180753] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:14.388 [2024-07-25 13:36:11.180836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.388 [2024-07-25 13:36:11.180897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.388 [2024-07-25 13:36:11.180919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.388 [2024-07-25 13:36:11.180922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.388 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.388 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:07:14.388 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:14.388 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:14.388 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:14.388 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.388 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:14.388 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.388 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:14.388 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.388 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:14.388 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.388 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:14.388 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.388 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:14.388 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.388 13:36:11 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:14.388 [2024-07-25 13:36:11.322466] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.388 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:14.389 Malloc0 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:14.389 [2024-07-25 13:36:11.384414] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=480225 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=480226 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=480229 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:14.389 { 00:07:14.389 "params": { 00:07:14.389 "name": "Nvme$subsystem", 00:07:14.389 "trtype": "$TEST_TRANSPORT", 00:07:14.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:14.389 "adrfam": "ipv4", 00:07:14.389 "trsvcid": "$NVMF_PORT", 00:07:14.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:14.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:14.389 "hdgst": ${hdgst:-false}, 00:07:14.389 "ddgst": ${ddgst:-false} 00:07:14.389 }, 00:07:14.389 "method": "bdev_nvme_attach_controller" 00:07:14.389 } 00:07:14.389 EOF 00:07:14.389 )") 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=480231 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:14.389 { 00:07:14.389 "params": { 00:07:14.389 "name": "Nvme$subsystem", 00:07:14.389 "trtype": "$TEST_TRANSPORT", 00:07:14.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:14.389 "adrfam": "ipv4", 00:07:14.389 "trsvcid": "$NVMF_PORT", 00:07:14.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:14.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:14.389 "hdgst": ${hdgst:-false}, 00:07:14.389 "ddgst": ${ddgst:-false} 00:07:14.389 }, 00:07:14.389 "method": "bdev_nvme_attach_controller" 00:07:14.389 } 00:07:14.389 EOF 00:07:14.389 )") 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:14.389 { 00:07:14.389 "params": { 00:07:14.389 
"name": "Nvme$subsystem", 00:07:14.389 "trtype": "$TEST_TRANSPORT", 00:07:14.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:14.389 "adrfam": "ipv4", 00:07:14.389 "trsvcid": "$NVMF_PORT", 00:07:14.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:14.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:14.389 "hdgst": ${hdgst:-false}, 00:07:14.389 "ddgst": ${ddgst:-false} 00:07:14.389 }, 00:07:14.389 "method": "bdev_nvme_attach_controller" 00:07:14.389 } 00:07:14.389 EOF 00:07:14.389 )") 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:14.389 { 00:07:14.389 "params": { 00:07:14.389 "name": "Nvme$subsystem", 00:07:14.389 "trtype": "$TEST_TRANSPORT", 00:07:14.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:14.389 "adrfam": "ipv4", 00:07:14.389 "trsvcid": "$NVMF_PORT", 00:07:14.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:14.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:14.389 "hdgst": ${hdgst:-false}, 00:07:14.389 "ddgst": ${ddgst:-false} 00:07:14.389 }, 00:07:14.389 "method": "bdev_nvme_attach_controller" 00:07:14.389 } 00:07:14.389 EOF 00:07:14.389 )") 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 480225 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:14.389 "params": { 00:07:14.389 "name": "Nvme1", 00:07:14.389 "trtype": "tcp", 00:07:14.389 "traddr": "10.0.0.2", 00:07:14.389 "adrfam": "ipv4", 00:07:14.389 "trsvcid": "4420", 00:07:14.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:14.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:14.389 "hdgst": false, 00:07:14.389 "ddgst": false 00:07:14.389 }, 00:07:14.389 "method": "bdev_nvme_attach_controller" 00:07:14.389 }' 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:14.389 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:14.389 "params": { 00:07:14.389 "name": "Nvme1", 00:07:14.389 "trtype": "tcp", 00:07:14.389 "traddr": "10.0.0.2", 00:07:14.389 "adrfam": "ipv4", 00:07:14.389 "trsvcid": "4420", 00:07:14.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:14.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:14.389 "hdgst": false, 00:07:14.389 "ddgst": false 00:07:14.389 }, 00:07:14.389 "method": "bdev_nvme_attach_controller" 00:07:14.389 }' 00:07:14.390 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:14.390 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:14.390 "params": { 00:07:14.390 "name": "Nvme1", 00:07:14.390 "trtype": "tcp", 00:07:14.390 "traddr": "10.0.0.2", 00:07:14.390 "adrfam": "ipv4", 00:07:14.390 "trsvcid": "4420", 00:07:14.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:14.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:14.390 "hdgst": false, 00:07:14.390 "ddgst": false 00:07:14.390 }, 00:07:14.390 "method": "bdev_nvme_attach_controller" 00:07:14.390 }' 00:07:14.390 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:14.390 13:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:14.390 "params": { 00:07:14.390 "name": "Nvme1", 00:07:14.390 "trtype": "tcp", 00:07:14.390 "traddr": "10.0.0.2", 00:07:14.390 "adrfam": "ipv4", 00:07:14.390 "trsvcid": "4420", 00:07:14.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:14.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:14.390 "hdgst": false, 00:07:14.390 "ddgst": false 00:07:14.390 }, 00:07:14.390 "method": "bdev_nvme_attach_controller" 00:07:14.390 }' 00:07:14.648 [2024-07-25 13:36:11.433828] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:14.648 [2024-07-25 13:36:11.433903] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:14.648 [2024-07-25 13:36:11.434501] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:14.648 [2024-07-25 13:36:11.434501] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:14.648 [2024-07-25 13:36:11.434502] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
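
Joined and pretty-printed, the --json payload each bdevperf instance reads from /dev/fd/63 therefore comes out roughly as below. The inner stanza is exactly what the printf output above shows; the outer subsystems/bdev wrapper is inferred from the helper's jq step:

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
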
00:07:14.648 [2024-07-25 13:36:11.434580] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:07:14.648 [2024-07-25 13:36:11.434580] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:07:14.648 [2024-07-25 13:36:11.434581] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:07:14.648 EAL: No free 2048 kB hugepages reported on node 1
00:07:14.648 EAL: No free 2048 kB hugepages reported on node 1
[2024-07-25 13:36:11.587106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:14.648 EAL: No free 2048 kB hugepages reported on node 1
00:07:14.648 [2024-07-25 13:36:11.677577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:07:14.907 [2024-07-25 13:36:11.690686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:14.907 EAL: No free 2048 kB hugepages reported on node 1
00:07:14.907 [2024-07-25 13:36:11.787413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:07:14.907 [2024-07-25 13:36:11.790261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:14.907 [2024-07-25 13:36:11.890305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:07:14.907 [2024-07-25 13:36:11.894561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:15.167 [2024-07-25 13:36:11.994123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:07:15.167 Running I/O for 1 seconds...
00:07:15.167 Running I/O for 1 seconds...
00:07:15.167 Running I/O for 1 seconds...
00:07:15.427 Running I/O for 1 seconds...
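
The four initializations above correspond to bdev_io_wait.sh fanning out one bdevperf per workload and waiting on all of them. A minimal sketch of that pattern, with flags copied from the traced commands; the write job's flags (-m 0x10 -i 1) are inferred from its Core Mask 0x10 result table below, and every PID variable name except UNMAP_PID is hypothetical:

# parallel bdevperf fan-out, as traced (sketch; the real script waits on each pid in turn)
bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
"$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
"$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
READ_PID=$!
"$bdevperf" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
FLUSH_PID=$!
"$bdevperf" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
UNMAP_PID=$!
sync
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
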
00:07:16.361
00:07:16.361 Latency(us)
00:07:16.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:16.361 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:07:16.361 Nvme1n1 : 1.01 11590.16 45.27 0.00 0.00 11001.09 6553.60 17476.27
00:07:16.361 ===================================================================================================================
00:07:16.361 Total : 11590.16 45.27 0.00 0.00 11001.09 6553.60 17476.27
00:07:16.361
00:07:16.361 Latency(us)
00:07:16.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:16.361 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:07:16.361 Nvme1n1 : 1.02 5154.54 20.13 0.00 0.00 24538.45 8107.05 34369.99
00:07:16.362 ===================================================================================================================
00:07:16.362 Total : 5154.54 20.13 0.00 0.00 24538.45 8107.05 34369.99
00:07:16.362
00:07:16.362 Latency(us)
00:07:16.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:16.362 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:07:16.362 Nvme1n1 : 1.00 187940.24 734.14 0.00 0.00 678.37 268.52 983.04
00:07:16.362 ===================================================================================================================
00:07:16.362 Total : 187940.24 734.14 0.00 0.00 678.37 268.52 983.04
00:07:16.362
00:07:16.362 Latency(us)
00:07:16.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:16.362 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:07:16.362 Nvme1n1 : 1.01 5196.10 20.30 0.00 0.00 24515.10 8738.13 52040.44
00:07:16.362 ===================================================================================================================
00:07:16.362 Total : 5196.10 20.30 0.00 0.00 24515.10 8738.13 52040.44
00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 480226
00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 480229
00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 480231
00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync
00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e
00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:16.619 rmmod nvme_tcp 00:07:16.619 rmmod nvme_fabrics 00:07:16.619 rmmod nvme_keyring 00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 480198 ']' 00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 480198 00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 480198 ']' 00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 480198 00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 480198 00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 480198' 00:07:16.619 killing process with pid 480198 00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 480198 00:07:16.619 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 480198 00:07:16.877 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:16.877 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:16.877 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:16.877 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:16.877 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:16.877 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.877 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.877 13:36:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.409 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:19.409 00:07:19.409 real 0m7.247s 00:07:19.409 user 0m16.688s 00:07:19.409 sys 0m3.596s 00:07:19.409 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.409 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:19.409 ************************************ 00:07:19.409 END TEST nvmf_bdev_io_wait 
00:07:19.409 ************************************ 00:07:19.409 13:36:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:19.409 13:36:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:19.409 13:36:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.409 13:36:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:19.409 ************************************ 00:07:19.409 START TEST nvmf_queue_depth 00:07:19.409 ************************************ 00:07:19.409 13:36:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:19.409 * Looking for test storage... 00:07:19.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:19.409 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:19.409 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:19.409 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:19.409 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:07:19.410 13:36:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@296 -- # e810=() 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:21.313 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:21.313 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:21.313 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:21.313 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:21.313 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:21.314 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:21.314 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:21.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:21.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:07:21.314 00:07:21.314 --- 10.0.0.2 ping statistics --- 00:07:21.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.314 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:07:21.314 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:21.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:21.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:07:21.314 00:07:21.314 --- 10.0.0.1 ping statistics --- 00:07:21.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.314 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:07:21.314 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.314 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:07:21.314 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:21.314 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.314 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:21.314 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:21.314 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.314 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:21.314 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:21.574 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:21.574 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:21.574 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:21.574 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.574 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=482467 00:07:21.574 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:21.574 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 482467 00:07:21.574 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 482467 ']' 00:07:21.574 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.574 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.574 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
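
The namespace plumbing traced above (nvmf/common.sh's nvmf_tcp_init, @229-@268) condenses to the commands below: the first e810 port, cvl_0_0, is moved into the cvl_0_0_ns_spdk namespace as the target side, while cvl_0_1 stays in the root namespace as the initiator. All commands are taken from the trace; run as root:

# condensed replay of the traced nvmf_tcp_init steps
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator
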
00:07:21.574 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.574 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.574 [2024-07-25 13:36:18.410927] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:21.574 [2024-07-25 13:36:18.411000] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.574 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.574 [2024-07-25 13:36:18.476111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.574 [2024-07-25 13:36:18.582322] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.574 [2024-07-25 13:36:18.582390] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.574 [2024-07-25 13:36:18.582410] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.574 [2024-07-25 13:36:18.582437] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.574 [2024-07-25 13:36:18.582447] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:21.574 [2024-07-25 13:36:18.582474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.834 [2024-07-25 13:36:18.730329] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.834 Malloc0 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.834 [2024-07-25 13:36:18.798854] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=482596 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 482596 /var/tmp/bdevperf.sock 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 482596 ']' 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:21.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.834 13:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:21.834 [2024-07-25 13:36:18.842120] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
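
The queue_depth bring-up traced above condenses to five target-side RPCs plus the host-side attach that follows in the log. rpc.py is shown here in place of the harness's rpc_cmd wrapper; parameters are copied from the traced commands:

# target side: TCP transport, a 64 MiB / 512 B-block malloc bdev, and a subsystem exposing it
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# host side: bdevperf idles with -z on its own RPC socket until the controller
# is attached and the q=1024 verify run is kicked off
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests
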
00:07:21.834 [2024-07-25 13:36:18.842198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482596 ]
00:07:22.110 EAL: No free 2048 kB hugepages reported on node 1
00:07:22.110 [2024-07-25 13:36:18.901769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:22.110 [2024-07-25 13:36:19.006446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:22.110 13:36:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:22.110 13:36:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0
00:07:22.110 13:36:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:07:22.110 13:36:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:22.110 13:36:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:07:22.381 NVMe0n1
00:07:22.381 13:36:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:22.381 13:36:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:07:22.381 Running I/O for 10 seconds...
00:07:34.592
00:07:34.592 Latency(us)
00:07:34.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:34.592 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:07:34.592 Verification LBA range: start 0x0 length 0x4000
00:07:34.592 NVMe0n1 : 10.07 8967.63 35.03 0.00 0.00 113677.52 13010.11 72235.24
00:07:34.592 ===================================================================================================================
00:07:34.592 Total : 8967.63 35.03 0.00 0.00 113677.52 13010.11 72235.24
00:07:34.592 0
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 482596
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 482596 ']'
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 482596
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 482596
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 482596'
00:07:34.592 killing process with pid 482596
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 482596
00:07:34.592 Received shutdown signal, test time was about 10.000000 seconds
00:07:34.592
00:07:34.592 Latency(us)
00:07:34.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:34.592 ===================================================================================================================
00:07:34.592 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 482596
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:34.592 rmmod nvme_tcp
00:07:34.592 rmmod nvme_fabrics
00:07:34.592 rmmod nvme_keyring
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 482467 ']'
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 482467
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 482467 ']'
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 482467
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 482467
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 482467'
00:07:34.592 killing process with pid 482467
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 482467
00:07:34.592 13:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 482467
00:07:34.592 13:36:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:07:34.592 13:36:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:07:34.592 13:36:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496
-- # nvmf_tcp_fini 00:07:34.592 13:36:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:34.592 13:36:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:34.592 13:36:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.592 13:36:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:34.592 13:36:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.162 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:35.162 00:07:35.162 real 0m16.154s 00:07:35.162 user 0m22.547s 00:07:35.162 sys 0m3.135s 00:07:35.162 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.162 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:35.162 ************************************ 00:07:35.162 END TEST nvmf_queue_depth 00:07:35.162 ************************************ 00:07:35.162 13:36:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:35.162 13:36:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:35.162 13:36:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.162 13:36:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:35.162 ************************************ 00:07:35.162 START TEST nvmf_target_multipath 00:07:35.162 ************************************ 00:07:35.162 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:35.421 * Looking for test storage... 
00:07:35.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.421 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.422 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.422 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:35.422 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:35.422 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:35.422 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:35.422 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:35.422 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:35.422 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.422 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:35.422 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:35.422 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.422 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:35.422 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:35.422 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:35.422 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.422 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:35.422 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.422 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:35.422 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:35.422 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:07:35.422 13:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
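The stretch of trace that follows is nvmf/common.sh's gather_supported_nvmf_pci_devs: it buckets NICs by PCI "vendor:device" ID into per-family arrays and then keeps the family this rig is configured for (the e810 branch is taken below). A condensed sketch, assuming pci_bus_cache is the associative array common.sh builds earlier, keyed "vendor:device" with PCI addresses as values, and trimmed to the IDs that matter in this run:

    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})    # E810 QSFP variant
    e810+=(${pci_bus_cache["$intel:0x159b"]})    # E810 SFP variant; the two 0x159b hits below
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # ConnectX-5, one of several Mellanox IDs traced
    pci_devs=("${e810[@]}")                      # e810 is the selected type on this rig
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # kernel netdev name(s) behind each port
    done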
00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:37.327 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:37.327 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:37.327 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:37.327 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.328 13:36:34 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:37.328 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:37.328 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.328 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:37.328 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:07:37.328 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:37.328 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:37.328 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:37.328 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.328 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.328 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.328 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:37.328 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.328 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.328 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:37.328 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.328 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.328 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:37.328 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:37.328 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.328 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.328 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.328 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.328 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:37.328 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:37.588 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:37.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:07:37.588 00:07:37.588 --- 10.0.0.2 ping statistics --- 00:07:37.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.588 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:37.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:07:37.588 00:07:37.588 --- 10.0.0.1 ping statistics --- 00:07:37.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.588 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:07:37.588 only one NIC for nvmf test 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:37.588 rmmod nvme_tcp 00:07:37.588 rmmod nvme_fabrics 00:07:37.588 rmmod nvme_keyring 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:37.588 13:36:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.490 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:39.491 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:07:39.491 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:07:39.491 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:39.491 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:07:39.491 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:39.491 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:07:39.491 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:39.491 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:39.491 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:39.491 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:07:39.491 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:39.750 00:07:39.750 real 0m4.344s 
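Two things happened in the stretch above: target/multipath.sh@45-48 bailed out with 'only one NIC for nvmf test' (the empty-string test at @45 is against NVMF_SECOND_TARGET_IP, which nvmf/common.sh@240 left blank, and the multipath test cannot run without a second target address), and nvmftestfini tore the rig back down. Roughly, that teardown amounts to the sketch below; _remove_spdk_ns runs under xtrace_disable_per_cmd, so the namespace deletion itself never shows in this log, and the exact retry/break logic of the modprobe loop lives in nvmf/common.sh:

    sync
    set +e
    for i in {1..20}; do                  # module removal can be briefly busy
        modprobe -v -r nvme-tcp           # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines
        modprobe -v -r nvme-fabrics && break
    done
    set -e
    eval '_remove_spdk_ns 15> /dev/null'  # deletes cvl_0_0_ns_spdk, output suppressed
    ip -4 addr flush cvl_0_1              # drop the initiator's 10.0.0.1/24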
00:07:39.750 user 0m0.823s 00:07:39.750 sys 0m1.512s 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:39.750 ************************************ 00:07:39.750 END TEST nvmf_target_multipath 00:07:39.750 ************************************ 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:39.750 ************************************ 00:07:39.750 START TEST nvmf_zcopy 00:07:39.750 ************************************ 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:39.750 * Looking for test storage... 00:07:39.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.750 13:36:36 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:39.750 13:36:36 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:07:39.750 13:36:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:07:42.287 13:36:38 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:42.287 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:42.287 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:42.287 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:42.287 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:42.288 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.288 13:36:38 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:42.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:42.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:07:42.288 00:07:42.288 --- 10.0.0.2 ping statistics --- 00:07:42.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.288 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:42.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:42.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:07:42.288 00:07:42.288 --- 10.0.0.1 ping statistics --- 00:07:42.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.288 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=487676 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 487676 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 487676 ']' 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:42.288 13:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:42.288 [2024-07-25 13:36:38.924159] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
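The two clean pings above close out nvmf_tcp_init for the zcopy run: the target port cvl_0_0 sits in a private network namespace with 10.0.0.2/24 while the initiator port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, so NVMe/TCP traffic has to traverse the NICs instead of being short-circuited through the local stack. Condensed from the commands traced above:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                            # root namespace to target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # namespace back to initiator

With connectivity proven, modprobe nvme-tcp loads the initiator-side kernel module and nvmfappstart launches nvmf_tgt (pid 487676, '-i 0 -e 0xFFFF -m 0x2') inside the namespace, with waitforlisten blocking until the RPC socket answers; its startup banner follows.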
00:07:42.288 [2024-07-25 13:36:38.924246] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.288 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.288 [2024-07-25 13:36:38.989139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.288 [2024-07-25 13:36:39.088656] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.288 [2024-07-25 13:36:39.088727] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.288 [2024-07-25 13:36:39.088767] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.288 [2024-07-25 13:36:39.088778] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:42.288 [2024-07-25 13:36:39.088786] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:42.288 [2024-07-25 13:36:39.088827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:42.288 [2024-07-25 13:36:39.237032] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:42.288 [2024-07-25 13:36:39.253285] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.288 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:07:42.289 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.289 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:42.289 malloc0 00:07:42.289 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.289 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:07:42.289 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.289 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:42.289 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.289 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:07:42.289 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:07:42.289 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:07:42.289 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:07:42.289 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:42.289 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:42.289 { 00:07:42.289 "params": { 00:07:42.289 "name": "Nvme$subsystem", 00:07:42.289 "trtype": "$TEST_TRANSPORT", 00:07:42.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:42.289 "adrfam": "ipv4", 00:07:42.289 "trsvcid": "$NVMF_PORT", 00:07:42.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:42.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:42.289 "hdgst": ${hdgst:-false}, 00:07:42.289 "ddgst": ${ddgst:-false} 00:07:42.289 }, 00:07:42.289 "method": "bdev_nvme_attach_controller" 00:07:42.289 } 00:07:42.289 EOF 00:07:42.289 )") 00:07:42.289 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:07:42.289 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
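With the listener up on 10.0.0.2:4420 and malloc0 exposed as NSID 1, the target side is complete. The nvmfappstart and rpc_cmd calls traced above boil down to the following rpc.py session; the until-loop is a sketch of waitforlisten (which polls the default /var/tmp/spdk.sock RPC socket, shown here with rpc_get_methods as the probe), and the flag readings follow rpc.py's help text:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$spdk/scripts/rpc.py"
    ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    until "$rpc" rpc_get_methods &> /dev/null; do sleep 0.5; done   # waitforlisten, sketched
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy               # TCP transport, zero-copy enabled
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0                      # 32 MiB ramdisk, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The gen_nvmf_target_json output printed just below is the initiator-side mirror image: a single bdev_nvme_attach_controller pointing bdevperf at 10.0.0.2:4420 as host nqn.2016-06.io.spdk:host1.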
00:07:42.289 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:07:42.289 13:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:07:42.289 "params": {
00:07:42.289 "name": "Nvme1",
00:07:42.289 "trtype": "tcp",
00:07:42.289 "traddr": "10.0.0.2",
00:07:42.289 "adrfam": "ipv4",
00:07:42.289 "trsvcid": "4420",
00:07:42.289 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:07:42.289 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:07:42.289 "hdgst": false,
00:07:42.289 "ddgst": false
00:07:42.289 },
00:07:42.289 "method": "bdev_nvme_attach_controller"
00:07:42.289 }'
00:07:42.547 [2024-07-25 13:36:39.347866] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:07:42.547 [2024-07-25 13:36:39.347944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487807 ]
00:07:42.548 EAL: No free 2048 kB hugepages reported on node 1
00:07:42.548 [2024-07-25 13:36:39.413961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:42.548 [2024-07-25 13:36:39.520730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:42.807 Running I/O for 10 seconds...
00:07:52.783
00:07:52.783 Latency(us)
00:07:52.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:52.783 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:07:52.783 Verification LBA range: start 0x0 length 0x1000
00:07:52.783 Nvme1n1 : 10.01 5927.59 46.31 0.00 0.00 21535.60 3543.80 30874.74
00:07:52.783 ===================================================================================================================
00:07:52.783 Total : 5927.59 46.31 0.00 0.00 21535.60 3543.80 30874.74
00:07:53.041 13:36:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=489005
00:07:53.041 13:36:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:07:53.041 13:36:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:07:53.041 13:36:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:07:53.041 13:36:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:07:53.041 13:36:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:07:53.041 13:36:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:07:53.041 13:36:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:07:53.041 13:36:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:07:53.041 {
00:07:53.041 "params": {
00:07:53.041 "name": "Nvme$subsystem",
00:07:53.041 "trtype": "$TEST_TRANSPORT",
00:07:53.041 "traddr": "$NVMF_FIRST_TARGET_IP",
00:07:53.041 "adrfam": "ipv4",
00:07:53.041 "trsvcid": "$NVMF_PORT",
00:07:53.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:07:53.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:07:53.041 "hdgst": ${hdgst:-false},
00:07:53.041 "ddgst": ${ddgst:-false}
00:07:53.041 },
00:07:53.041 "method": "bdev_nvme_attach_controller"
00:07:53.041 }
00:07:53.041 EOF
00:07:53.041 )")
00:07:53.041 [2024-07-25
13:36:50.007378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:53.041 [2024-07-25 13:36:50.007438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:53.041 13:36:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:07:53.041 13:36:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:07:53.041 13:36:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:07:53.041 13:36:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:53.041 "params": { 00:07:53.041 "name": "Nvme1", 00:07:53.041 "trtype": "tcp", 00:07:53.041 "traddr": "10.0.0.2", 00:07:53.041 "adrfam": "ipv4", 00:07:53.041 "trsvcid": "4420", 00:07:53.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:53.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:53.041 "hdgst": false, 00:07:53.041 "ddgst": false 00:07:53.041 }, 00:07:53.041 "method": "bdev_nvme_attach_controller" 00:07:53.041 }' 00:07:53.041 [2024-07-25 13:36:50.015267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:53.041 [2024-07-25 13:36:50.015298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:53.041 [2024-07-25 13:36:50.023306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:53.041 [2024-07-25 13:36:50.023353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:53.041 [2024-07-25 13:36:50.031317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:53.041 [2024-07-25 13:36:50.031361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:53.041 [2024-07-25 13:36:50.039339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:53.041 [2024-07-25 13:36:50.039394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:53.042 [2024-07-25 13:36:50.047353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:53.042 [2024-07-25 13:36:50.047377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:53.042 [2024-07-25 13:36:50.052746] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
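The repeating errors here are the point of this phase, not a failure: while the 5-second randrw bdevperf run (perfpid 489005) keeps I/O in flight, zcopy.sh re-issues nvmf_subsystem_add_ns for an NSID that already exists. Each attempt pauses the subsystem, fails in spdk_nvmf_subsystem_add_ns_ext with 'Requested NSID 1 already in use' (reported through the nvmf_rpc_ns_paused callback), and resumes it, which exercises the pause/resume path while zero-copy I/O is outstanding. A sketch of the shape of this phase, using the suite's gen_nvmf_target_json and rpc_cmd helpers seen earlier in the trace; the precise loop body lives in test/nvmf/target/zcopy.sh and is not visible in this slice of the log:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$spdk/build/examples/bdevperf" --json <(gen_nvmf_target_json) \
        -t 5 -q 128 -w randrw -M 50 -o 8192 &    # 5 s, queue depth 128, 50/50 mix, 8 KiB I/Os
    perfpid=$!
    while kill -0 "$perfpid" 2> /dev/null; do    # hammer add_ns until bdevperf exits
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done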
00:07:53.042 [2024-07-25 13:36:50.052837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489005 ]
00:07:53.042 [2024-07-25 13:36:50.055372 .. 13:36:50.079472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeated 4x)
00:07:53.300 EAL: No free 2048 kB hugepages reported on node 1
00:07:53.300 [2024-07-25 13:36:50.087466 .. 13:36:50.111530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeated 4x)
00:07:53.300 [2024-07-25 13:36:50.113632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:53.300 [2024-07-25 13:36:50.119551 .. 13:36:50.223833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeated 14x)
00:07:53.300 [2024-07-25 13:36:50.224636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:53.300 [2024-07-25 13:36:50.231835 .. 13:36:50.400367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeated 22x)
00:07:53.560 Running I/O for 5 seconds...
00:07:53.560 → 00:07:55.898 [2024-07-25 13:36:50.408367 .. 13:36:52.924224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (pair repeated continuously, roughly every 10–15 ms, throughout the 5-second bdevperf run; excerpt ends mid-entry at 13:36:52.924224)
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:52.937480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:52.937507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:52.947692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:52.947718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:52.958407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:52.958433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:52.970971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:52.970997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:52.980935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:52.980961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:52.991694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:52.991720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:53.002498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:53.002525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:53.013443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:53.013469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:53.024465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:53.024491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:53.035368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:53.035411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:53.045849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:53.045876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:53.056785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:53.056812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:53.070175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:53.070202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:53.080575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:53.080602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:53.091374] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:53.091400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:53.104001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:53.104027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:53.114241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:53.114275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:53.124828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:53.124853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:53.135936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:53.135962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:53.146774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:53.146800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:53.159243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:53.159270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:53.170939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:53.170965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:53.180851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:53.180877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.158 [2024-07-25 13:36:53.191930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.158 [2024-07-25 13:36:53.191956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.418 [2024-07-25 13:36:53.202893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.418 [2024-07-25 13:36:53.202919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.418 [2024-07-25 13:36:53.213625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.418 [2024-07-25 13:36:53.213651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.418 [2024-07-25 13:36:53.224491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.418 [2024-07-25 13:36:53.224517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.418 [2024-07-25 13:36:53.234962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.418 [2024-07-25 13:36:53.234988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.418 [2024-07-25 13:36:53.248822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.418 [2024-07-25 13:36:53.248849] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.418 [2024-07-25 13:36:53.259409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.418 [2024-07-25 13:36:53.259437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.418 [2024-07-25 13:36:53.270055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.418 [2024-07-25 13:36:53.270103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.418 [2024-07-25 13:36:53.281025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.418 [2024-07-25 13:36:53.281074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.418 [2024-07-25 13:36:53.292145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.418 [2024-07-25 13:36:53.292172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.418 [2024-07-25 13:36:53.305473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.418 [2024-07-25 13:36:53.305500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.418 [2024-07-25 13:36:53.317349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.418 [2024-07-25 13:36:53.317389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.418 [2024-07-25 13:36:53.326724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.418 [2024-07-25 13:36:53.326757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.418 [2024-07-25 13:36:53.338390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.418 [2024-07-25 13:36:53.338417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.418 [2024-07-25 13:36:53.349010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.418 [2024-07-25 13:36:53.349050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.418 [2024-07-25 13:36:53.360300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.418 [2024-07-25 13:36:53.360327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.418 [2024-07-25 13:36:53.370922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.418 [2024-07-25 13:36:53.370949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.418 [2024-07-25 13:36:53.381673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.418 [2024-07-25 13:36:53.381699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.418 [2024-07-25 13:36:53.394158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.418 [2024-07-25 13:36:53.394190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.418 [2024-07-25 13:36:53.404010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.418 [2024-07-25 13:36:53.404050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.418 [2024-07-25 13:36:53.414880] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.418 [2024-07-25 13:36:53.414906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.418 [2024-07-25 13:36:53.425418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.418 [2024-07-25 13:36:53.425444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.418 [2024-07-25 13:36:53.436319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.418 [2024-07-25 13:36:53.436347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.418 [2024-07-25 13:36:53.446836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.418 [2024-07-25 13:36:53.446862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.676 [2024-07-25 13:36:53.457229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.676 [2024-07-25 13:36:53.457257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.676 [2024-07-25 13:36:53.467957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.676 [2024-07-25 13:36:53.467983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.676 [2024-07-25 13:36:53.480738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.676 [2024-07-25 13:36:53.480765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.676 [2024-07-25 13:36:53.491056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.676 [2024-07-25 13:36:53.491091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.676 [2024-07-25 13:36:53.501631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.676 [2024-07-25 13:36:53.501656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.677 [2024-07-25 13:36:53.512410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.677 [2024-07-25 13:36:53.512450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.677 [2024-07-25 13:36:53.523266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.677 [2024-07-25 13:36:53.523292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.677 [2024-07-25 13:36:53.534299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.677 [2024-07-25 13:36:53.534348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.677 [2024-07-25 13:36:53.546744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.677 [2024-07-25 13:36:53.546770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.677 [2024-07-25 13:36:53.556876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.677 [2024-07-25 13:36:53.556902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.677 [2024-07-25 13:36:53.567214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.677 [2024-07-25 13:36:53.567241] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.677 [2024-07-25 13:36:53.577640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.677 [2024-07-25 13:36:53.577666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.677 [2024-07-25 13:36:53.588525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.677 [2024-07-25 13:36:53.588552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.677 [2024-07-25 13:36:53.601636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.677 [2024-07-25 13:36:53.601662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.677 [2024-07-25 13:36:53.611592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.677 [2024-07-25 13:36:53.611618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.677 [2024-07-25 13:36:53.622607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.677 [2024-07-25 13:36:53.622634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.677 [2024-07-25 13:36:53.636368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.677 [2024-07-25 13:36:53.636395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.677 [2024-07-25 13:36:53.646566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.677 [2024-07-25 13:36:53.646592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.677 [2024-07-25 13:36:53.657068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.677 [2024-07-25 13:36:53.657095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.677 [2024-07-25 13:36:53.667959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.677 [2024-07-25 13:36:53.667986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.677 [2024-07-25 13:36:53.678729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.677 [2024-07-25 13:36:53.678755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.677 [2024-07-25 13:36:53.689263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.677 [2024-07-25 13:36:53.689290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.677 [2024-07-25 13:36:53.700233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.677 [2024-07-25 13:36:53.700261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.677 [2024-07-25 13:36:53.710524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.677 [2024-07-25 13:36:53.710553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.935 [2024-07-25 13:36:53.721444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.935 [2024-07-25 13:36:53.721471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.935 [2024-07-25 13:36:53.734228] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.935 [2024-07-25 13:36:53.734256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.935 [2024-07-25 13:36:53.746147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.935 [2024-07-25 13:36:53.746183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.935 [2024-07-25 13:36:53.756420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.935 [2024-07-25 13:36:53.756447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.935 [2024-07-25 13:36:53.767668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.935 [2024-07-25 13:36:53.767694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.935 [2024-07-25 13:36:53.778264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.935 [2024-07-25 13:36:53.778291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.935 [2024-07-25 13:36:53.788997] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.935 [2024-07-25 13:36:53.789023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.935 [2024-07-25 13:36:53.801710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.935 [2024-07-25 13:36:53.801736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.935 [2024-07-25 13:36:53.811929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.935 [2024-07-25 13:36:53.811955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.935 [2024-07-25 13:36:53.822325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.935 [2024-07-25 13:36:53.822366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.935 [2024-07-25 13:36:53.833017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.935 [2024-07-25 13:36:53.833066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.935 [2024-07-25 13:36:53.843396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.935 [2024-07-25 13:36:53.843423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.935 [2024-07-25 13:36:53.854090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.935 [2024-07-25 13:36:53.854117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.935 [2024-07-25 13:36:53.864729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.935 [2024-07-25 13:36:53.864754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.935 [2024-07-25 13:36:53.877484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.935 [2024-07-25 13:36:53.877511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.935 [2024-07-25 13:36:53.887616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.935 [2024-07-25 13:36:53.887642] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.935 [2024-07-25 13:36:53.898135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.935 [2024-07-25 13:36:53.898161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.935 [2024-07-25 13:36:53.908874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.935 [2024-07-25 13:36:53.908900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.935 [2024-07-25 13:36:53.919599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.935 [2024-07-25 13:36:53.919625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.935 [2024-07-25 13:36:53.930269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.935 [2024-07-25 13:36:53.930295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.935 [2024-07-25 13:36:53.940686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.935 [2024-07-25 13:36:53.940712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.935 [2024-07-25 13:36:53.951248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.935 [2024-07-25 13:36:53.951275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:56.935 [2024-07-25 13:36:53.961726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:56.935 [2024-07-25 13:36:53.961752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.193 [2024-07-25 13:36:53.972664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.193 [2024-07-25 13:36:53.972691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.193 [2024-07-25 13:36:53.985056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.193 [2024-07-25 13:36:53.985091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.193 [2024-07-25 13:36:53.994645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.193 [2024-07-25 13:36:53.994671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.193 [2024-07-25 13:36:54.006287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.193 [2024-07-25 13:36:54.006323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.193 [2024-07-25 13:36:54.018710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.193 [2024-07-25 13:36:54.018737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.193 [2024-07-25 13:36:54.029407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.193 [2024-07-25 13:36:54.029433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.193 [2024-07-25 13:36:54.040248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.193 [2024-07-25 13:36:54.040274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.193 [2024-07-25 13:36:54.052860] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.193 [2024-07-25 13:36:54.052886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.193 [2024-07-25 13:36:54.063099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.193 [2024-07-25 13:36:54.063127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.193 [2024-07-25 13:36:54.073987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.194 [2024-07-25 13:36:54.074013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.194 [2024-07-25 13:36:54.086316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.194 [2024-07-25 13:36:54.086343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.194 [2024-07-25 13:36:54.096378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.194 [2024-07-25 13:36:54.096404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.194 [2024-07-25 13:36:54.107022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.194 [2024-07-25 13:36:54.107073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.194 [2024-07-25 13:36:54.118025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.194 [2024-07-25 13:36:54.118074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.194 [2024-07-25 13:36:54.128934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.194 [2024-07-25 13:36:54.128960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.194 [2024-07-25 13:36:54.141115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.194 [2024-07-25 13:36:54.141141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.194 [2024-07-25 13:36:54.151300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.194 [2024-07-25 13:36:54.151327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.194 [2024-07-25 13:36:54.161791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.194 [2024-07-25 13:36:54.161817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.194 [2024-07-25 13:36:54.171945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.194 [2024-07-25 13:36:54.171971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.194 [2024-07-25 13:36:54.182741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.194 [2024-07-25 13:36:54.182766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.194 [2024-07-25 13:36:54.193410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.194 [2024-07-25 13:36:54.193436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.194 [2024-07-25 13:36:54.203490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.194 [2024-07-25 13:36:54.203516] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.194 [2024-07-25 13:36:54.213722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.194 [2024-07-25 13:36:54.213748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.194 [2024-07-25 13:36:54.224470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.194 [2024-07-25 13:36:54.224497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.452 [2024-07-25 13:36:54.237025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.452 [2024-07-25 13:36:54.237075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.452 [2024-07-25 13:36:54.247414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.452 [2024-07-25 13:36:54.247455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.452 [2024-07-25 13:36:54.258003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.452 [2024-07-25 13:36:54.258028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.452 [2024-07-25 13:36:54.269111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.452 [2024-07-25 13:36:54.269138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.452 [2024-07-25 13:36:54.279557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.452 [2024-07-25 13:36:54.279583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.452 [2024-07-25 13:36:54.290068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.452 [2024-07-25 13:36:54.290094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.452 [2024-07-25 13:36:54.300568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.452 [2024-07-25 13:36:54.300595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.452 [2024-07-25 13:36:54.314500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.452 [2024-07-25 13:36:54.314528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.452 [2024-07-25 13:36:54.324994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.452 [2024-07-25 13:36:54.325020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.452 [2024-07-25 13:36:54.335561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.452 [2024-07-25 13:36:54.335587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.452 [2024-07-25 13:36:54.346551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.452 [2024-07-25 13:36:54.346578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.452 [2024-07-25 13:36:54.357517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.452 [2024-07-25 13:36:54.357543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.452 [2024-07-25 13:36:54.370114] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.452 [2024-07-25 13:36:54.370141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.452 [2024-07-25 13:36:54.380586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.452 [2024-07-25 13:36:54.380612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.452 [2024-07-25 13:36:54.391441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.452 [2024-07-25 13:36:54.391467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.452 [2024-07-25 13:36:54.404158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.452 [2024-07-25 13:36:54.404185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.452 [2024-07-25 13:36:54.414330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.452 [2024-07-25 13:36:54.414371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.452 [2024-07-25 13:36:54.425501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.452 [2024-07-25 13:36:54.425542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.452 [2024-07-25 13:36:54.438002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.452 [2024-07-25 13:36:54.438028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.452 [2024-07-25 13:36:54.448538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.452 [2024-07-25 13:36:54.448564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.452 [2024-07-25 13:36:54.459434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.452 [2024-07-25 13:36:54.459460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.452 [2024-07-25 13:36:54.472252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.452 [2024-07-25 13:36:54.472279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.452 [2024-07-25 13:36:54.481939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.452 [2024-07-25 13:36:54.481964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.712 [2024-07-25 13:36:54.492635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.712 [2024-07-25 13:36:54.492663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.712 [2024-07-25 13:36:54.505943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.712 [2024-07-25 13:36:54.505969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.712 [2024-07-25 13:36:54.517759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.712 [2024-07-25 13:36:54.517785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.712 [2024-07-25 13:36:54.527426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.712 [2024-07-25 13:36:54.527452] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.712 [2024-07-25 13:36:54.538814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.712 [2024-07-25 13:36:54.538841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.712 [2024-07-25 13:36:54.551335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.712 [2024-07-25 13:36:54.551363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.712 [2024-07-25 13:36:54.560974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.712 [2024-07-25 13:36:54.561001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.712 [2024-07-25 13:36:54.571686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.712 [2024-07-25 13:36:54.571712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.712 [2024-07-25 13:36:54.584072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.712 [2024-07-25 13:36:54.584100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.712 [2024-07-25 13:36:54.594146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.712 [2024-07-25 13:36:54.594174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.712 [2024-07-25 13:36:54.604748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.712 [2024-07-25 13:36:54.604774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.712 [2024-07-25 13:36:54.615516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.712 [2024-07-25 13:36:54.615542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.712 [2024-07-25 13:36:54.628286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.712 [2024-07-25 13:36:54.628313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.712 [2024-07-25 13:36:54.640173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.712 [2024-07-25 13:36:54.640200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.712 [2024-07-25 13:36:54.649249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.712 [2024-07-25 13:36:54.649275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.712 [2024-07-25 13:36:54.660846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.712 [2024-07-25 13:36:54.660872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.712 [2024-07-25 13:36:54.671456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.712 [2024-07-25 13:36:54.671482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.712 [2024-07-25 13:36:54.682022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.712 [2024-07-25 13:36:54.682072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.712 [2024-07-25 13:36:54.692674] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.712 [2024-07-25 13:36:54.692700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.712 [2024-07-25 13:36:54.703485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.712 [2024-07-25 13:36:54.703512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.712 [2024-07-25 13:36:54.716101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.712 [2024-07-25 13:36:54.716127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.712 [2024-07-25 13:36:54.726000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.712 [2024-07-25 13:36:54.726027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.712 [2024-07-25 13:36:54.736736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.712 [2024-07-25 13:36:54.736762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:54.747650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:54.747679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:54.760140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:54.760169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:54.770200] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:54.770227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:54.780856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:54.780889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:54.791665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:54.791705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:54.802431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:54.802457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:54.813221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:54.813247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:54.831074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:54.831104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:54.841280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:54.841308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:54.851878] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:54.851905] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:54.862941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:54.862967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:54.873427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:54.873455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:54.883880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:54.883907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:54.894243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:54.894271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:54.905260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:54.905287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:54.916149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:54.916176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:54.926926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:54.926952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:54.937524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:54.937550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:54.948458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:54.948498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:54.959156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:54.959182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:54.969828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:54.969854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:54.982312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:54.982355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:54.992686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:54.992722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:57.973 [2024-07-25 13:36:55.003397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:57.973 [2024-07-25 13:36:55.003423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:58.233 [2024-07-25 13:36:55.016690] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:58.233 [2024-07-25 13:36:55.016717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats with advancing timestamps through 2024-07-25 13:36:55.427178 ...]
00:07:58.494
00:07:58.494 Latency(us)
00:07:58.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:58.494 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:07:58.494 Nvme1n1 : 5.01 11884.99 92.85 0.00 0.00 10754.91 4611.79 25437.68
00:07:58.494 ===================================================================================================================
00:07:58.494 Total : 11884.99 92.85 0.00 0.00 10754.91 4611.79 25437.68
[... the error pair resumes at 2024-07-25 13:36:55.435171 and keeps repeating while the namespace churn winds down ...]
00:07:58.754 [2024-07-25 13:36:55.683841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:58.754 [2024-07-25 13:36:55.683861]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:58.754 [2024-07-25 13:36:55.691861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:58.754 [2024-07-25 13:36:55.691881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:58.754 [2024-07-25 13:36:55.699886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:58.754 [2024-07-25 13:36:55.699909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:58.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (489005) - No such process 00:07:58.754 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 489005 00:07:58.754 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.754 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.754 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:58.754 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.754 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:58.754 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.754 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:58.754 delay0 00:07:58.754 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.754 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:07:58.754 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.754 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:58.754 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.754 13:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:07:58.754 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.012 [2024-07-25 13:36:55.818353] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:05.624 Initializing NVMe Controllers 00:08:05.624 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:05.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:05.624 Initialization complete. Launching workers. 
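Before the abort results below, a condensed recap of what this zcopy phase set up. The trace above boils down to the following sequence; this is a sketch that assumes the suite's rpc_cmd helper forwards to scripts/rpc.py (as in SPDK's autotest common helpers) and shortens absolute paths, not a literal transcript:
  # re-home NSID 1 onto a deliberately slow bdev, then fire abortable I/O at it
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s added latency per op
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
The long burst of "Requested NSID 1 already in use" errors earlier comes from nvmf_subsystem_add_ns being re-issued for a namespace that is still attached; the suite treats those failures as expected, and the test passes.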
00:08:05.624 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 138 00:08:05.624 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 425, failed to submit 33 00:08:05.624 success 293, unsuccess 132, failed 0 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:05.624 rmmod nvme_tcp 00:08:05.624 rmmod nvme_fabrics 00:08:05.624 rmmod nvme_keyring 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 487676 ']' 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 487676 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 487676 ']' 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 487676 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 487676 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 487676' 00:08:05.624 killing process with pid 487676 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 487676 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 487676 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:05.624 13:37:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.531 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:07.531 00:08:07.531 real 0m27.847s 00:08:07.531 user 0m41.035s 00:08:07.531 sys 0m8.297s 00:08:07.531 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.531 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:07.531 ************************************ 00:08:07.531 END TEST nvmf_zcopy 00:08:07.531 ************************************ 00:08:07.531 13:37:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:07.531 13:37:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:07.531 13:37:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.531 13:37:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:07.531 ************************************ 00:08:07.531 START TEST nvmf_nmic 00:08:07.531 ************************************ 00:08:07.531 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:07.531 * Looking for test storage... 00:08:07.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:08:07.532 13:37:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:10.066 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ 
ice == unknown ]] 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:10.066 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:10.066 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:10.066 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:10.066 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:10.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:10.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:08:10.067 00:08:10.067 --- 10.0.0.2 ping statistics --- 00:08:10.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.067 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:08:10.067 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:10.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:10.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:08:10.067 00:08:10.067 --- 10.0.0.1 ping statistics --- 00:08:10.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.067 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:08:10.067 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.067 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:08:10.067 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:10.067 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.067 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:10.067 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:10.067 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.067 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:10.067 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:10.067 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:10.067 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:10.067 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:10.067 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:10.067 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=492396 00:08:10.067 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:10.067 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 492396 00:08:10.067 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 492396 ']' 00:08:10.067 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.067 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.067 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.067 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.067 13:37:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:10.067 [2024-07-25 13:37:06.873818] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
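The target app is now starting inside the network-namespace fixture that nvmftestinit traced above. Condensed from the ip/iptables commands shown in the trace (cvl_0_0 and cvl_0_1 are the two renamed E810 ports):
  ip netns add cvl_0_0_ns_spdk                                   # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the initiator
The two pings (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) verify the path before nvmf_tgt is launched with ip netns exec.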
00:08:10.067 [2024-07-25 13:37:06.873910] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.067 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.067 [2024-07-25 13:37:06.938001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.067 [2024-07-25 13:37:07.042938] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.067 [2024-07-25 13:37:07.042997] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.067 [2024-07-25 13:37:07.043022] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.067 [2024-07-25 13:37:07.043033] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.067 [2024-07-25 13:37:07.043043] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.067 [2024-07-25 13:37:07.043124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.067 [2024-07-25 13:37:07.043210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.067 [2024-07-25 13:37:07.043257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.067 [2024-07-25 13:37:07.043260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:10.326 [2024-07-25 13:37:07.203697] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:10.326 Malloc0 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:10.326 [2024-07-25 13:37:07.255209] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:10.326 test case1: single bdev can't be used in multiple subsystems 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:10.326 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.327 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:10.327 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.327 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:10.327 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.327 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:10.327 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:10.327 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.327 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:10.327 [2024-07-25 13:37:07.279030] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:10.327 [2024-07-25 13:37:07.279083] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:10.327 [2024-07-25 13:37:07.279109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.327 request: 00:08:10.327 { 00:08:10.327 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:10.327 "namespace": { 
00:08:10.327 "bdev_name": "Malloc0", 00:08:10.327 "no_auto_visible": false 00:08:10.327 }, 00:08:10.327 "method": "nvmf_subsystem_add_ns", 00:08:10.327 "req_id": 1 00:08:10.327 } 00:08:10.327 Got JSON-RPC error response 00:08:10.327 response: 00:08:10.327 { 00:08:10.327 "code": -32602, 00:08:10.327 "message": "Invalid parameters" 00:08:10.327 } 00:08:10.327 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:10.327 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:10.327 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:10.327 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:10.327 Adding namespace failed - expected result. 00:08:10.327 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:10.327 test case2: host connect to nvmf target in multiple paths 00:08:10.327 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:10.327 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.327 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:10.327 [2024-07-25 13:37:07.287167] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:10.327 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.327 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:11.266 13:37:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:11.523 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:11.523 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:08:11.523 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:11.523 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:11.523 13:37:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:08:14.062 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:14.062 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:14.062 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:14.062 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:14.062 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:14.062 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 
00:08:14.062 13:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:14.062 [global] 00:08:14.062 thread=1 00:08:14.062 invalidate=1 00:08:14.062 rw=write 00:08:14.062 time_based=1 00:08:14.062 runtime=1 00:08:14.062 ioengine=libaio 00:08:14.062 direct=1 00:08:14.062 bs=4096 00:08:14.062 iodepth=1 00:08:14.062 norandommap=0 00:08:14.062 numjobs=1 00:08:14.062 00:08:14.062 verify_dump=1 00:08:14.062 verify_backlog=512 00:08:14.062 verify_state_save=0 00:08:14.062 do_verify=1 00:08:14.062 verify=crc32c-intel 00:08:14.062 [job0] 00:08:14.062 filename=/dev/nvme0n1 00:08:14.062 Could not set queue depth (nvme0n1) 00:08:14.062 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:14.062 fio-3.35 00:08:14.062 Starting 1 thread 00:08:15.000 00:08:15.000 job0: (groupid=0, jobs=1): err= 0: pid=492962: Thu Jul 25 13:37:11 2024 00:08:15.000 read: IOPS=520, BW=2083KiB/s (2133kB/s)(2116KiB/1016msec) 00:08:15.000 slat (nsec): min=4588, max=33241, avg=12889.86, stdev=7636.17 00:08:15.000 clat (usec): min=165, max=42354, avg=1471.81, stdev=7147.99 00:08:15.000 lat (usec): min=170, max=42371, avg=1484.69, stdev=7150.54 00:08:15.000 clat percentiles (usec): 00:08:15.000 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 194], 00:08:15.000 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:08:15.000 | 70.00th=[ 212], 80.00th=[ 217], 90.00th=[ 239], 95.00th=[ 404], 00:08:15.000 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:15.000 | 99.99th=[42206] 00:08:15.000 write: IOPS=1007, BW=4031KiB/s (4128kB/s)(4096KiB/1016msec); 0 zone resets 00:08:15.000 slat (usec): min=6, max=26877, avg=43.23, stdev=839.41 00:08:15.000 clat (usec): min=121, max=395, avg=174.63, stdev=44.31 00:08:15.000 lat (usec): min=128, max=27128, avg=217.86, stdev=842.91 00:08:15.000 clat percentiles (usec): 00:08:15.000 | 1.00th=[ 127], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 143], 00:08:15.000 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 155], 60.00th=[ 161], 00:08:15.000 | 70.00th=[ 176], 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 253], 00:08:15.000 | 99.00th=[ 281], 99.50th=[ 367], 99.90th=[ 383], 99.95th=[ 396], 00:08:15.000 | 99.99th=[ 396] 00:08:15.000 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:08:15.000 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:15.000 lat (usec) : 250=91.69%, 500=7.21%, 750=0.06% 00:08:15.000 lat (msec) : 50=1.03% 00:08:15.000 cpu : usr=1.48%, sys=2.17%, ctx=1556, majf=0, minf=2 00:08:15.000 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:15.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:15.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:15.000 issued rwts: total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:15.000 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:15.000 00:08:15.000 Run status group 0 (all jobs): 00:08:15.000 READ: bw=2083KiB/s (2133kB/s), 2083KiB/s-2083KiB/s (2133kB/s-2133kB/s), io=2116KiB (2167kB), run=1016-1016msec 00:08:15.000 WRITE: bw=4031KiB/s (4128kB/s), 4031KiB/s-4031KiB/s (4128kB/s-4128kB/s), io=4096KiB (4194kB), run=1016-1016msec 00:08:15.000 00:08:15.000 Disk stats (read/write): 00:08:15.000 nvme0n1: ios=552/1024, merge=0/0, ticks=1638/170, in_queue=1808, util=98.50% 
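The job above was generated by the fio-wrapper helper; reconstructed from the job file it printed (wrapper internals are not shown in this log), the equivalent standalone invocation is approximately:
  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --thread=1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based=1 --runtime=1 \
      --invalidate=1 --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_state_save=0
That is one second of 4 KiB sequential writes at queue depth 1 against the exported namespace, with a CRC32C read-back verify, which is where the read I/O in the summary comes from.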
00:08:15.000 13:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:15.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:15.261 rmmod nvme_tcp 00:08:15.261 rmmod nvme_fabrics 00:08:15.261 rmmod nvme_keyring 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 492396 ']' 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 492396 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 492396 ']' 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 492396 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 492396 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 492396' 00:08:15.261 killing process with pid 492396 00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 492396 
00:08:15.261 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 492396 00:08:15.561 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:15.561 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:15.561 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:15.561 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:15.561 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:15.561 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.561 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.561 13:37:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.469 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:17.469 00:08:17.469 real 0m9.994s 00:08:17.469 user 0m22.269s 00:08:17.469 sys 0m2.434s 00:08:17.469 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.469 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:17.469 ************************************ 00:08:17.469 END TEST nvmf_nmic 00:08:17.469 ************************************ 00:08:17.469 13:37:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:17.469 13:37:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:17.469 13:37:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.469 13:37:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:17.728 ************************************ 00:08:17.728 START TEST nvmf_fio_target 00:08:17.728 ************************************ 00:08:17.728 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:17.728 * Looking for test storage... 
00:08:17.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.728 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.728 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:17.728 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.728 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.728 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.728 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.728 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.728 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.728 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.728 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.728 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.728 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.728 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:17.728 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.729 13:37:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.729 13:37:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:08:17.729 13:37:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:19.633 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:19.633 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:19.633 
13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.633 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:19.634 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:19.634 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:19.634 13:37:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:19.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:08:19.634 00:08:19.634 --- 10.0.0.2 ping statistics --- 00:08:19.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.634 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:19.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:08:19.634 00:08:19.634 --- 10.0.0.1 ping statistics --- 00:08:19.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.634 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=495115 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 495115 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 495115 ']' 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:19.634 13:37:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:19.895 [2024-07-25 13:37:16.703757] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:19.895 [2024-07-25 13:37:16.703841] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.895 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.895 [2024-07-25 13:37:16.771445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.895 [2024-07-25 13:37:16.881923] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.895 [2024-07-25 13:37:16.881973] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.895 [2024-07-25 13:37:16.881986] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.895 [2024-07-25 13:37:16.881998] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.895 [2024-07-25 13:37:16.882007] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.895 [2024-07-25 13:37:16.882080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.895 [2024-07-25 13:37:16.882151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.895 [2024-07-25 13:37:16.882219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.895 [2024-07-25 13:37:16.882223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.154 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:20.154 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:08:20.154 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:20.154 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:20.154 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:20.154 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.154 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:20.412 [2024-07-25 13:37:17.276358] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.412 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:20.670 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:20.670 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:20.928 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:20.928 13:37:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:21.186 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:21.186 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:21.444 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:21.444 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:21.701 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:21.959 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:21.959 13:37:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:22.217 13:37:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:22.217 13:37:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:22.475 13:37:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:22.475 13:37:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:22.732 13:37:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:22.988 13:37:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:22.988 13:37:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:23.246 13:37:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:23.246 13:37:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:23.502 13:37:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:23.759 [2024-07-25 13:37:20.636088] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.759 13:37:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:24.016 13:37:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:24.275 13:37:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:24.844 13:37:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:24.844 13:37:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:08:24.844 13:37:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:24.844 13:37:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:08:24.844 13:37:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:08:24.844 13:37:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:08:27.377 13:37:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:27.377 13:37:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:27.378 13:37:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:27.378 13:37:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:08:27.378 13:37:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:27.378 13:37:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:08:27.378 13:37:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:27.378 [global] 00:08:27.378 thread=1 00:08:27.378 invalidate=1 00:08:27.378 rw=write 00:08:27.378 time_based=1 00:08:27.378 runtime=1 00:08:27.378 ioengine=libaio 00:08:27.378 direct=1 00:08:27.378 bs=4096 00:08:27.378 iodepth=1 00:08:27.378 norandommap=0 00:08:27.378 numjobs=1 00:08:27.378 00:08:27.378 verify_dump=1 00:08:27.378 verify_backlog=512 00:08:27.378 verify_state_save=0 00:08:27.378 do_verify=1 00:08:27.378 verify=crc32c-intel 00:08:27.378 [job0] 00:08:27.378 filename=/dev/nvme0n1 00:08:27.378 [job1] 00:08:27.378 filename=/dev/nvme0n2 00:08:27.378 [job2] 00:08:27.378 filename=/dev/nvme0n3 00:08:27.378 [job3] 00:08:27.378 filename=/dev/nvme0n4 00:08:27.378 Could not set queue depth (nvme0n1) 00:08:27.378 Could not set queue depth (nvme0n2) 00:08:27.378 Could not set queue depth (nvme0n3) 00:08:27.378 Could not set queue depth (nvme0n4) 00:08:27.378 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:27.378 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:27.378 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:27.378 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:27.378 fio-3.35 00:08:27.378 Starting 4 threads 00:08:28.315 00:08:28.315 job0: (groupid=0, jobs=1): err= 0: pid=496090: Thu Jul 25 13:37:25 2024 00:08:28.315 read: IOPS=1017, BW=4071KiB/s (4169kB/s)(4120KiB/1012msec) 00:08:28.315 slat (nsec): min=5794, max=41873, avg=12308.48, stdev=8317.91 00:08:28.315 clat (usec): min=170, max=42141, avg=679.55, stdev=3982.55 00:08:28.315 lat (usec): min=177, max=42148, avg=691.86, stdev=3983.83 00:08:28.315 clat percentiles (usec): 00:08:28.315 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 192], 
00:08:28.315 | 30.00th=[ 198], 40.00th=[ 206], 50.00th=[ 233], 60.00th=[ 277], 00:08:28.315 | 70.00th=[ 302], 80.00th=[ 433], 90.00th=[ 482], 95.00th=[ 498], 00:08:28.315 | 99.00th=[ 2769], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:08:28.315 | 99.99th=[42206] 00:08:28.315 write: IOPS=1517, BW=6071KiB/s (6217kB/s)(6144KiB/1012msec); 0 zone resets 00:08:28.315 slat (nsec): min=7534, max=46340, avg=13610.12, stdev=5494.07 00:08:28.315 clat (usec): min=121, max=410, avg=174.98, stdev=49.70 00:08:28.315 lat (usec): min=130, max=422, avg=188.59, stdev=50.56 00:08:28.315 clat percentiles (usec): 00:08:28.315 | 1.00th=[ 124], 5.00th=[ 127], 10.00th=[ 130], 20.00th=[ 135], 00:08:28.315 | 30.00th=[ 141], 40.00th=[ 149], 50.00th=[ 157], 60.00th=[ 169], 00:08:28.315 | 70.00th=[ 190], 80.00th=[ 219], 90.00th=[ 245], 95.00th=[ 281], 00:08:28.315 | 99.00th=[ 330], 99.50th=[ 359], 99.90th=[ 404], 99.95th=[ 412], 00:08:28.315 | 99.99th=[ 412] 00:08:28.315 bw ( KiB/s): min= 5232, max= 7056, per=34.60%, avg=6144.00, stdev=1289.76, samples=2 00:08:28.315 iops : min= 1308, max= 1764, avg=1536.00, stdev=322.44, samples=2 00:08:28.315 lat (usec) : 250=76.42%, 500=21.59%, 750=1.56% 00:08:28.315 lat (msec) : 4=0.04%, 50=0.39% 00:08:28.315 cpu : usr=2.27%, sys=2.87%, ctx=2567, majf=0, minf=2 00:08:28.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:28.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:28.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:28.315 issued rwts: total=1030,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:28.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:28.315 job1: (groupid=0, jobs=1): err= 0: pid=496091: Thu Jul 25 13:37:25 2024 00:08:28.315 read: IOPS=1013, BW=4055KiB/s (4152kB/s)(4128KiB/1018msec) 00:08:28.315 slat (nsec): min=5348, max=65870, avg=11204.09, stdev=7282.23 00:08:28.315 clat (usec): min=163, max=41995, avg=695.89, stdev=4217.36 00:08:28.315 lat (usec): min=169, max=42025, avg=707.09, stdev=4218.96 00:08:28.315 clat percentiles (usec): 00:08:28.315 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 198], 00:08:28.315 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 219], 60.00th=[ 237], 00:08:28.315 | 70.00th=[ 273], 80.00th=[ 302], 90.00th=[ 429], 95.00th=[ 498], 00:08:28.315 | 99.00th=[40633], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:08:28.315 | 99.99th=[42206] 00:08:28.315 write: IOPS=1508, BW=6035KiB/s (6180kB/s)(6144KiB/1018msec); 0 zone resets 00:08:28.315 slat (nsec): min=7107, max=52326, avg=11804.58, stdev=5209.06 00:08:28.315 clat (usec): min=117, max=406, avg=170.12, stdev=35.11 00:08:28.315 lat (usec): min=125, max=417, avg=181.92, stdev=36.13 00:08:28.315 clat percentiles (usec): 00:08:28.315 | 1.00th=[ 126], 5.00th=[ 133], 10.00th=[ 139], 20.00th=[ 149], 00:08:28.315 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 167], 00:08:28.315 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 200], 95.00th=[ 247], 00:08:28.315 | 99.00th=[ 314], 99.50th=[ 330], 99.90th=[ 404], 99.95th=[ 408], 00:08:28.315 | 99.99th=[ 408] 00:08:28.315 bw ( KiB/s): min= 4096, max= 8192, per=34.60%, avg=6144.00, stdev=2896.31, samples=2 00:08:28.315 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:08:28.315 lat (usec) : 250=82.98%, 500=15.38%, 750=1.21% 00:08:28.315 lat (msec) : 50=0.43% 00:08:28.315 cpu : usr=1.77%, sys=2.75%, ctx=2569, majf=0, minf=1 00:08:28.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:08:28.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:28.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:28.315 issued rwts: total=1032,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:28.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:28.315 job2: (groupid=0, jobs=1): err= 0: pid=496092: Thu Jul 25 13:37:25 2024 00:08:28.315 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:08:28.315 slat (nsec): min=8842, max=15872, avg=14001.59, stdev=2113.41 00:08:28.315 clat (usec): min=357, max=42078, avg=39563.57, stdev=8773.17 00:08:28.315 lat (usec): min=372, max=42089, avg=39577.58, stdev=8772.93 00:08:28.315 clat percentiles (usec): 00:08:28.315 | 1.00th=[ 359], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:08:28.315 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:08:28.315 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:08:28.315 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:28.315 | 99.99th=[42206] 00:08:28.315 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:08:28.315 slat (nsec): min=7676, max=68181, avg=10174.95, stdev=3685.02 00:08:28.315 clat (usec): min=191, max=310, avg=242.80, stdev=21.44 00:08:28.315 lat (usec): min=203, max=371, avg=252.98, stdev=21.66 00:08:28.315 clat percentiles (usec): 00:08:28.315 | 1.00th=[ 196], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 231], 00:08:28.315 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 245], 00:08:28.315 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 281], 95.00th=[ 285], 00:08:28.315 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 310], 99.95th=[ 310], 00:08:28.315 | 99.99th=[ 310] 00:08:28.315 bw ( KiB/s): min= 4096, max= 4096, per=23.07%, avg=4096.00, stdev= 0.00, samples=1 00:08:28.315 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:28.315 lat (usec) : 250=75.47%, 500=20.60% 00:08:28.315 lat (msec) : 50=3.93% 00:08:28.316 cpu : usr=0.10%, sys=0.60%, ctx=535, majf=0, minf=1 00:08:28.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:28.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:28.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:28.316 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:28.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:28.316 job3: (groupid=0, jobs=1): err= 0: pid=496093: Thu Jul 25 13:37:25 2024 00:08:28.316 read: IOPS=640, BW=2563KiB/s (2624kB/s)(2660KiB/1038msec) 00:08:28.316 slat (nsec): min=5627, max=51410, avg=12108.69, stdev=6691.28 00:08:28.316 clat (usec): min=193, max=41975, avg=1203.96, stdev=6128.27 00:08:28.316 lat (usec): min=200, max=41998, avg=1216.07, stdev=6130.42 00:08:28.316 clat percentiles (usec): 00:08:28.316 | 1.00th=[ 202], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 219], 00:08:28.316 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 243], 00:08:28.316 | 70.00th=[ 258], 80.00th=[ 338], 90.00th=[ 453], 95.00th=[ 478], 00:08:28.316 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:28.316 | 99.99th=[42206] 00:08:28.316 write: IOPS=986, BW=3946KiB/s (4041kB/s)(4096KiB/1038msec); 0 zone resets 00:08:28.316 slat (nsec): min=7382, max=50707, avg=14619.62, stdev=5811.50 00:08:28.316 clat (usec): min=154, max=378, avg=202.80, stdev=39.47 00:08:28.316 lat (usec): min=163, max=411, 
avg=217.42, stdev=39.04 00:08:28.316 clat percentiles (usec): 00:08:28.316 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 174], 00:08:28.316 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 200], 00:08:28.316 | 70.00th=[ 210], 80.00th=[ 231], 90.00th=[ 249], 95.00th=[ 289], 00:08:28.316 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 367], 99.95th=[ 379], 00:08:28.316 | 99.99th=[ 379] 00:08:28.316 bw ( KiB/s): min= 8192, max= 8192, per=46.13%, avg=8192.00, stdev= 0.00, samples=1 00:08:28.316 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:28.316 lat (usec) : 250=80.82%, 500=18.12%, 750=0.12% 00:08:28.316 lat (msec) : 2=0.06%, 50=0.89% 00:08:28.316 cpu : usr=1.16%, sys=2.22%, ctx=1689, majf=0, minf=1 00:08:28.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:28.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:28.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:28.316 issued rwts: total=665,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:28.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:28.316 00:08:28.316 Run status group 0 (all jobs): 00:08:28.316 READ: bw=10.3MiB/s (10.8MB/s), 87.8KiB/s-4071KiB/s (89.9kB/s-4169kB/s), io=10.7MiB (11.3MB), run=1002-1038msec 00:08:28.316 WRITE: bw=17.3MiB/s (18.2MB/s), 2044KiB/s-6071KiB/s (2093kB/s-6217kB/s), io=18.0MiB (18.9MB), run=1002-1038msec 00:08:28.316 00:08:28.316 Disk stats (read/write): 00:08:28.316 nvme0n1: ios=1053/1536, merge=0/0, ticks=1500/244, in_queue=1744, util=98.00% 00:08:28.316 nvme0n2: ios=1051/1536, merge=0/0, ticks=1490/255, in_queue=1745, util=98.17% 00:08:28.316 nvme0n3: ios=43/512, merge=0/0, ticks=1690/123, in_queue=1813, util=97.91% 00:08:28.316 nvme0n4: ios=660/1024, merge=0/0, ticks=588/204, in_queue=792, util=89.67% 00:08:28.574 13:37:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:28.574 [global] 00:08:28.574 thread=1 00:08:28.574 invalidate=1 00:08:28.574 rw=randwrite 00:08:28.574 time_based=1 00:08:28.574 runtime=1 00:08:28.574 ioengine=libaio 00:08:28.574 direct=1 00:08:28.574 bs=4096 00:08:28.574 iodepth=1 00:08:28.574 norandommap=0 00:08:28.574 numjobs=1 00:08:28.574 00:08:28.574 verify_dump=1 00:08:28.574 verify_backlog=512 00:08:28.574 verify_state_save=0 00:08:28.574 do_verify=1 00:08:28.574 verify=crc32c-intel 00:08:28.574 [job0] 00:08:28.574 filename=/dev/nvme0n1 00:08:28.574 [job1] 00:08:28.574 filename=/dev/nvme0n2 00:08:28.574 [job2] 00:08:28.574 filename=/dev/nvme0n3 00:08:28.574 [job3] 00:08:28.574 filename=/dev/nvme0n4 00:08:28.574 Could not set queue depth (nvme0n1) 00:08:28.574 Could not set queue depth (nvme0n2) 00:08:28.574 Could not set queue depth (nvme0n3) 00:08:28.574 Could not set queue depth (nvme0n4) 00:08:28.574 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:28.574 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:28.574 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:28.574 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:28.574 fio-3.35 00:08:28.574 Starting 4 threads 00:08:29.950 00:08:29.950 job0: (groupid=0, jobs=1): err= 0: pid=496437: Thu 
Jul 25 13:37:26 2024 00:08:29.950 read: IOPS=1030, BW=4124KiB/s (4223kB/s)(4128KiB/1001msec) 00:08:29.950 slat (nsec): min=5534, max=39523, avg=12125.00, stdev=5571.30 00:08:29.950 clat (usec): min=171, max=42192, avg=650.51, stdev=4206.45 00:08:29.950 lat (usec): min=179, max=42209, avg=662.64, stdev=4206.59 00:08:29.950 clat percentiles (usec): 00:08:29.950 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 192], 00:08:29.950 | 30.00th=[ 196], 40.00th=[ 204], 50.00th=[ 212], 60.00th=[ 225], 00:08:29.950 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 245], 95.00th=[ 251], 00:08:29.950 | 99.00th=[40633], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:08:29.950 | 99.99th=[42206] 00:08:29.950 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:08:29.950 slat (nsec): min=6964, max=54949, avg=16019.64, stdev=7184.92 00:08:29.950 clat (usec): min=123, max=614, avg=183.01, stdev=45.56 00:08:29.950 lat (usec): min=130, max=644, avg=199.02, stdev=48.63 00:08:29.950 clat percentiles (usec): 00:08:29.950 | 1.00th=[ 129], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 143], 00:08:29.950 | 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 182], 00:08:29.950 | 70.00th=[ 202], 80.00th=[ 223], 90.00th=[ 239], 95.00th=[ 251], 00:08:29.950 | 99.00th=[ 322], 99.50th=[ 392], 99.90th=[ 611], 99.95th=[ 611], 00:08:29.950 | 99.99th=[ 611] 00:08:29.950 bw ( KiB/s): min= 6952, max= 6952, per=42.86%, avg=6952.00, stdev= 0.00, samples=1 00:08:29.950 iops : min= 1738, max= 1738, avg=1738.00, stdev= 0.00, samples=1 00:08:29.950 lat (usec) : 250=94.24%, 500=5.22%, 750=0.12% 00:08:29.950 lat (msec) : 50=0.43% 00:08:29.950 cpu : usr=3.00%, sys=4.60%, ctx=2569, majf=0, minf=1 00:08:29.950 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:29.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.950 issued rwts: total=1032,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:29.950 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:29.950 job1: (groupid=0, jobs=1): err= 0: pid=496439: Thu Jul 25 13:37:26 2024 00:08:29.950 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:08:29.950 slat (nsec): min=13439, max=34090, avg=21312.23, stdev=8793.23 00:08:29.950 clat (usec): min=40886, max=42055, avg=41533.06, stdev=525.26 00:08:29.950 lat (usec): min=40920, max=42071, avg=41554.37, stdev=521.65 00:08:29.950 clat percentiles (usec): 00:08:29.950 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:08:29.950 | 30.00th=[41157], 40.00th=[41157], 50.00th=[42206], 60.00th=[42206], 00:08:29.950 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:08:29.950 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:29.950 | 99.99th=[42206] 00:08:29.950 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:08:29.950 slat (nsec): min=6775, max=48025, avg=14726.04, stdev=5284.96 00:08:29.950 clat (usec): min=142, max=262, avg=167.08, stdev=12.47 00:08:29.950 lat (usec): min=155, max=296, avg=181.80, stdev=13.53 00:08:29.950 clat percentiles (usec): 00:08:29.950 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 157], 00:08:29.950 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:08:29.950 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 186], 00:08:29.950 | 99.00th=[ 204], 99.50th=[ 253], 99.90th=[ 265], 99.95th=[ 265], 
00:08:29.950 | 99.99th=[ 265] 00:08:29.950 bw ( KiB/s): min= 4096, max= 4096, per=25.25%, avg=4096.00, stdev= 0.00, samples=1 00:08:29.950 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:29.950 lat (usec) : 250=95.32%, 500=0.56% 00:08:29.950 lat (msec) : 50=4.12% 00:08:29.950 cpu : usr=0.50%, sys=0.59%, ctx=535, majf=0, minf=1 00:08:29.950 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:29.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.950 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:29.950 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:29.950 job2: (groupid=0, jobs=1): err= 0: pid=496441: Thu Jul 25 13:37:26 2024 00:08:29.950 read: IOPS=1134, BW=4538KiB/s (4647kB/s)(4556KiB/1004msec) 00:08:29.950 slat (nsec): min=5633, max=49747, avg=13014.53, stdev=5474.52 00:08:29.950 clat (usec): min=187, max=42178, avg=592.70, stdev=3869.70 00:08:29.950 lat (usec): min=194, max=42195, avg=605.71, stdev=3870.03 00:08:29.950 clat percentiles (usec): 00:08:29.950 | 1.00th=[ 192], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:08:29.950 | 30.00th=[ 212], 40.00th=[ 221], 50.00th=[ 229], 60.00th=[ 235], 00:08:29.950 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 251], 95.00th=[ 258], 00:08:29.950 | 99.00th=[ 1188], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:08:29.950 | 99.99th=[42206] 00:08:29.950 write: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec); 0 zone resets 00:08:29.950 slat (nsec): min=7083, max=55212, avg=14132.02, stdev=6780.78 00:08:29.950 clat (usec): min=135, max=439, avg=182.97, stdev=48.92 00:08:29.950 lat (usec): min=142, max=455, avg=197.10, stdev=53.12 00:08:29.950 clat percentiles (usec): 00:08:29.950 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 147], 00:08:29.950 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 174], 00:08:29.950 | 70.00th=[ 194], 80.00th=[ 225], 90.00th=[ 249], 95.00th=[ 277], 00:08:29.950 | 99.00th=[ 351], 99.50th=[ 404], 99.90th=[ 424], 99.95th=[ 441], 00:08:29.950 | 99.99th=[ 441] 00:08:29.950 bw ( KiB/s): min= 4096, max= 8192, per=37.88%, avg=6144.00, stdev=2896.31, samples=2 00:08:29.950 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:08:29.950 lat (usec) : 250=89.50%, 500=9.98%, 750=0.04%, 1000=0.04% 00:08:29.950 lat (msec) : 2=0.07%, 50=0.37% 00:08:29.950 cpu : usr=1.60%, sys=6.08%, ctx=2676, majf=0, minf=2 00:08:29.950 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:29.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.950 issued rwts: total=1139,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:29.950 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:29.950 job3: (groupid=0, jobs=1): err= 0: pid=496442: Thu Jul 25 13:37:26 2024 00:08:29.950 read: IOPS=20, BW=83.7KiB/s (85.8kB/s)(84.0KiB/1003msec) 00:08:29.950 slat (nsec): min=8541, max=33622, avg=20467.76, stdev=8521.84 00:08:29.950 clat (usec): min=40927, max=42083, avg=41740.56, stdev=446.77 00:08:29.950 lat (usec): min=40960, max=42091, avg=41761.03, stdev=443.00 00:08:29.950 clat percentiles (usec): 00:08:29.950 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:08:29.951 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:08:29.951 
| 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:08:29.951 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:29.951 | 99.99th=[42206] 00:08:29.951 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:08:29.951 slat (nsec): min=6967, max=51570, avg=16341.60, stdev=6968.46 00:08:29.951 clat (usec): min=143, max=389, avg=224.07, stdev=38.37 00:08:29.951 lat (usec): min=153, max=407, avg=240.41, stdev=36.16 00:08:29.951 clat percentiles (usec): 00:08:29.951 | 1.00th=[ 151], 5.00th=[ 163], 10.00th=[ 180], 20.00th=[ 196], 00:08:29.951 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 229], 00:08:29.951 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 265], 95.00th=[ 297], 00:08:29.951 | 99.00th=[ 367], 99.50th=[ 383], 99.90th=[ 392], 99.95th=[ 392], 00:08:29.951 | 99.99th=[ 392] 00:08:29.951 bw ( KiB/s): min= 4096, max= 4096, per=25.25%, avg=4096.00, stdev= 0.00, samples=1 00:08:29.951 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:29.951 lat (usec) : 250=77.49%, 500=18.57% 00:08:29.951 lat (msec) : 50=3.94% 00:08:29.951 cpu : usr=0.90%, sys=0.30%, ctx=534, majf=0, minf=1 00:08:29.951 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:29.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.951 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:29.951 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:29.951 00:08:29.951 Run status group 0 (all jobs): 00:08:29.951 READ: bw=8768KiB/s (8979kB/s), 83.7KiB/s-4538KiB/s (85.8kB/s-4647kB/s), io=8856KiB (9069kB), run=1001-1010msec 00:08:29.951 WRITE: bw=15.8MiB/s (16.6MB/s), 2028KiB/s-6138KiB/s (2076kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1010msec 00:08:29.951 00:08:29.951 Disk stats (read/write): 00:08:29.951 nvme0n1: ios=927/1024, merge=0/0, ticks=1547/171, in_queue=1718, util=97.80% 00:08:29.951 nvme0n2: ios=68/512, merge=0/0, ticks=1307/81, in_queue=1388, util=97.97% 00:08:29.951 nvme0n3: ios=1049/1529, merge=0/0, ticks=1496/264, in_queue=1760, util=97.80% 00:08:29.951 nvme0n4: ios=41/512, merge=0/0, ticks=1661/110, in_queue=1771, util=97.78% 00:08:29.951 13:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:29.951 [global] 00:08:29.951 thread=1 00:08:29.951 invalidate=1 00:08:29.951 rw=write 00:08:29.951 time_based=1 00:08:29.951 runtime=1 00:08:29.951 ioengine=libaio 00:08:29.951 direct=1 00:08:29.951 bs=4096 00:08:29.951 iodepth=128 00:08:29.951 norandommap=0 00:08:29.951 numjobs=1 00:08:29.951 00:08:29.951 verify_dump=1 00:08:29.951 verify_backlog=512 00:08:29.951 verify_state_save=0 00:08:29.951 do_verify=1 00:08:29.951 verify=crc32c-intel 00:08:29.951 [job0] 00:08:29.951 filename=/dev/nvme0n1 00:08:29.951 [job1] 00:08:29.951 filename=/dev/nvme0n2 00:08:29.951 [job2] 00:08:29.951 filename=/dev/nvme0n3 00:08:29.951 [job3] 00:08:29.951 filename=/dev/nvme0n4 00:08:29.951 Could not set queue depth (nvme0n1) 00:08:29.951 Could not set queue depth (nvme0n2) 00:08:29.951 Could not set queue depth (nvme0n3) 00:08:29.951 Could not set queue depth (nvme0n4) 00:08:30.209 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:30.209 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:30.210 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:30.210 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:30.210 fio-3.35 00:08:30.210 Starting 4 threads 00:08:31.618 00:08:31.618 job0: (groupid=0, jobs=1): err= 0: pid=496675: Thu Jul 25 13:37:28 2024 00:08:31.618 read: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec) 00:08:31.618 slat (usec): min=3, max=15481, avg=179.76, stdev=1135.54 00:08:31.618 clat (usec): min=9783, max=67243, avg=21234.24, stdev=11885.38 00:08:31.618 lat (usec): min=9788, max=67261, avg=21414.00, stdev=11997.67 00:08:31.618 clat percentiles (usec): 00:08:31.618 | 1.00th=[ 9765], 5.00th=[10028], 10.00th=[12518], 20.00th=[14353], 00:08:31.618 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15926], 60.00th=[16319], 00:08:31.618 | 70.00th=[18482], 80.00th=[31327], 90.00th=[44303], 95.00th=[47449], 00:08:31.618 | 99.00th=[55313], 99.50th=[56886], 99.90th=[67634], 99.95th=[67634], 00:08:31.618 | 99.99th=[67634] 00:08:31.618 write: IOPS=2713, BW=10.6MiB/s (11.1MB/s)(10.7MiB/1011msec); 0 zone resets 00:08:31.618 slat (usec): min=4, max=12424, avg=189.23, stdev=817.40 00:08:31.618 clat (usec): min=4679, max=71226, avg=26843.71, stdev=15179.83 00:08:31.618 lat (usec): min=6107, max=71275, avg=27032.94, stdev=15238.21 00:08:31.618 clat percentiles (usec): 00:08:31.618 | 1.00th=[ 9503], 5.00th=[10028], 10.00th=[10159], 20.00th=[13829], 00:08:31.618 | 30.00th=[22414], 40.00th=[23200], 50.00th=[23725], 60.00th=[24249], 00:08:31.618 | 70.00th=[24773], 80.00th=[33162], 90.00th=[54264], 95.00th=[65799], 00:08:31.618 | 99.00th=[68682], 99.50th=[69731], 99.90th=[70779], 99.95th=[70779], 00:08:31.618 | 99.99th=[70779] 00:08:31.618 bw ( KiB/s): min= 9152, max=11768, per=15.74%, avg=10460.00, stdev=1849.79, samples=2 00:08:31.618 iops : min= 2288, max= 2942, avg=2615.00, stdev=462.45, samples=2 00:08:31.618 lat (msec) : 10=4.47%, 20=43.35%, 50=44.50%, 100=7.67% 00:08:31.618 cpu : usr=3.66%, sys=5.05%, ctx=351, majf=0, minf=1 00:08:31.618 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:08:31.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:31.618 issued rwts: total=2560,2743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:31.618 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:31.618 job1: (groupid=0, jobs=1): err= 0: pid=496676: Thu Jul 25 13:37:28 2024 00:08:31.618 read: IOPS=5685, BW=22.2MiB/s (23.3MB/s)(22.4MiB/1010msec) 00:08:31.618 slat (usec): min=2, max=9713, avg=89.45, stdev=612.44 00:08:31.618 clat (usec): min=3818, max=22254, avg=11341.21, stdev=2832.86 00:08:31.618 lat (usec): min=3840, max=22269, avg=11430.66, stdev=2868.51 00:08:31.618 clat percentiles (usec): 00:08:31.618 | 1.00th=[ 5014], 5.00th=[ 7832], 10.00th=[ 8586], 20.00th=[ 9634], 00:08:31.618 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10814], 00:08:31.618 | 70.00th=[11338], 80.00th=[13435], 90.00th=[15795], 95.00th=[17433], 00:08:31.618 | 99.00th=[19530], 99.50th=[19792], 99.90th=[20317], 99.95th=[20317], 00:08:31.618 | 99.99th=[22152] 00:08:31.618 write: IOPS=6083, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1010msec); 0 zone resets 00:08:31.618 slat (usec): min=3, max=11416, avg=70.69, stdev=377.72 00:08:31.618 clat (usec): min=1583, max=22133, avg=10246.05, 
stdev=2251.85 00:08:31.618 lat (usec): min=1635, max=22147, avg=10316.74, stdev=2283.65 00:08:31.618 clat percentiles (usec): 00:08:31.618 | 1.00th=[ 3458], 5.00th=[ 5473], 10.00th=[ 6980], 20.00th=[ 9110], 00:08:31.618 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10945], 60.00th=[11076], 00:08:31.618 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11469], 95.00th=[11731], 00:08:31.618 | 99.00th=[17433], 99.50th=[19006], 99.90th=[20055], 99.95th=[20317], 00:08:31.618 | 99.99th=[22152] 00:08:31.618 bw ( KiB/s): min=24440, max=24576, per=36.87%, avg=24508.00, stdev=96.17, samples=2 00:08:31.618 iops : min= 6110, max= 6144, avg=6127.00, stdev=24.04, samples=2 00:08:31.618 lat (msec) : 2=0.01%, 4=0.90%, 10=25.62%, 20=73.22%, 50=0.25% 00:08:31.618 cpu : usr=7.63%, sys=9.02%, ctx=698, majf=0, minf=1 00:08:31.618 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:08:31.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:31.618 issued rwts: total=5742,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:31.618 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:31.618 job2: (groupid=0, jobs=1): err= 0: pid=496677: Thu Jul 25 13:37:28 2024 00:08:31.618 read: IOPS=2731, BW=10.7MiB/s (11.2MB/s)(10.8MiB/1009msec) 00:08:31.618 slat (usec): min=3, max=14577, avg=150.18, stdev=970.57 00:08:31.618 clat (usec): min=6424, max=44581, avg=17642.41, stdev=6441.15 00:08:31.618 lat (usec): min=6432, max=44586, avg=17792.58, stdev=6500.22 00:08:31.618 clat percentiles (usec): 00:08:31.618 | 1.00th=[ 6718], 5.00th=[10945], 10.00th=[11994], 20.00th=[13566], 00:08:31.618 | 30.00th=[13829], 40.00th=[14353], 50.00th=[16581], 60.00th=[17695], 00:08:31.618 | 70.00th=[17957], 80.00th=[19792], 90.00th=[27919], 95.00th=[31851], 00:08:31.618 | 99.00th=[40109], 99.50th=[41681], 99.90th=[44827], 99.95th=[44827], 00:08:31.618 | 99.99th=[44827] 00:08:31.618 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec); 0 zone resets 00:08:31.618 slat (usec): min=4, max=21914, avg=181.28, stdev=921.05 00:08:31.618 clat (usec): min=3475, max=61168, avg=24875.17, stdev=9856.38 00:08:31.618 lat (usec): min=3483, max=61189, avg=25056.46, stdev=9926.55 00:08:31.618 clat percentiles (usec): 00:08:31.618 | 1.00th=[ 5014], 5.00th=[10683], 10.00th=[13960], 20.00th=[15926], 00:08:31.618 | 30.00th=[21890], 40.00th=[23200], 50.00th=[23987], 60.00th=[24249], 00:08:31.618 | 70.00th=[26346], 80.00th=[32113], 90.00th=[39060], 95.00th=[42730], 00:08:31.618 | 99.00th=[61080], 99.50th=[61080], 99.90th=[61080], 99.95th=[61080], 00:08:31.618 | 99.99th=[61080] 00:08:31.618 bw ( KiB/s): min=12288, max=12288, per=18.49%, avg=12288.00, stdev= 0.00, samples=2 00:08:31.618 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:08:31.618 lat (msec) : 4=0.21%, 10=4.31%, 20=47.20%, 50=47.56%, 100=0.72% 00:08:31.618 cpu : usr=3.17%, sys=6.15%, ctx=383, majf=0, minf=1 00:08:31.618 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:08:31.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:31.618 issued rwts: total=2756,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:31.618 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:31.618 job3: (groupid=0, jobs=1): err= 0: pid=496678: Thu Jul 25 13:37:28 2024 00:08:31.618 read: IOPS=4589, BW=17.9MiB/s 
(18.8MB/s)(18.0MiB/1004msec) 00:08:31.618 slat (usec): min=3, max=11852, avg=115.59, stdev=795.07 00:08:31.618 clat (usec): min=4889, max=25524, avg=14281.92, stdev=3517.35 00:08:31.618 lat (usec): min=4896, max=25539, avg=14397.51, stdev=3564.15 00:08:31.618 clat percentiles (usec): 00:08:31.618 | 1.00th=[ 5669], 5.00th=[10421], 10.00th=[11207], 20.00th=[12518], 00:08:31.618 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13304], 60.00th=[13698], 00:08:31.618 | 70.00th=[14484], 80.00th=[15926], 90.00th=[20055], 95.00th=[22152], 00:08:31.618 | 99.00th=[24249], 99.50th=[24773], 99.90th=[25297], 99.95th=[25297], 00:08:31.618 | 99.99th=[25560] 00:08:31.618 write: IOPS=4822, BW=18.8MiB/s (19.8MB/s)(18.9MiB/1004msec); 0 zone resets 00:08:31.618 slat (usec): min=3, max=10747, avg=87.81, stdev=419.33 00:08:31.618 clat (usec): min=1542, max=25181, avg=12707.35, stdev=2612.23 00:08:31.618 lat (usec): min=1570, max=25195, avg=12795.16, stdev=2644.03 00:08:31.618 clat percentiles (usec): 00:08:31.618 | 1.00th=[ 3818], 5.00th=[ 6259], 10.00th=[ 8455], 20.00th=[12125], 00:08:31.618 | 30.00th=[13042], 40.00th=[13566], 50.00th=[13698], 60.00th=[13829], 00:08:31.618 | 70.00th=[13960], 80.00th=[14091], 90.00th=[14353], 95.00th=[14484], 00:08:31.618 | 99.00th=[15139], 99.50th=[15401], 99.90th=[25035], 99.95th=[25297], 00:08:31.618 | 99.99th=[25297] 00:08:31.618 bw ( KiB/s): min=17256, max=20464, per=28.37%, avg=18860.00, stdev=2268.40, samples=2 00:08:31.618 iops : min= 4314, max= 5116, avg=4715.00, stdev=567.10, samples=2 00:08:31.618 lat (msec) : 2=0.02%, 4=0.50%, 10=8.42%, 20=85.87%, 50=5.19% 00:08:31.618 cpu : usr=6.18%, sys=7.88%, ctx=613, majf=0, minf=1 00:08:31.618 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:08:31.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:31.618 issued rwts: total=4608,4842,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:31.618 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:31.618 00:08:31.618 Run status group 0 (all jobs): 00:08:31.618 READ: bw=60.5MiB/s (63.5MB/s), 9.89MiB/s-22.2MiB/s (10.4MB/s-23.3MB/s), io=61.2MiB (64.2MB), run=1004-1011msec 00:08:31.618 WRITE: bw=64.9MiB/s (68.1MB/s), 10.6MiB/s-23.8MiB/s (11.1MB/s-24.9MB/s), io=65.6MiB (68.8MB), run=1004-1011msec 00:08:31.618 00:08:31.618 Disk stats (read/write): 00:08:31.618 nvme0n1: ios=2098/2303, merge=0/0, ticks=23403/27620, in_queue=51023, util=86.77% 00:08:31.619 nvme0n2: ios=4791/5120, merge=0/0, ticks=52109/51341, in_queue=103450, util=86.79% 00:08:31.619 nvme0n3: ios=2429/2560, merge=0/0, ticks=41705/59042, in_queue=100747, util=97.91% 00:08:31.619 nvme0n4: ios=3835/4096, merge=0/0, ticks=53236/51052, in_queue=104288, util=89.68% 00:08:31.619 13:37:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:08:31.619 [global] 00:08:31.619 thread=1 00:08:31.619 invalidate=1 00:08:31.619 rw=randwrite 00:08:31.619 time_based=1 00:08:31.619 runtime=1 00:08:31.619 ioengine=libaio 00:08:31.619 direct=1 00:08:31.619 bs=4096 00:08:31.619 iodepth=128 00:08:31.619 norandommap=0 00:08:31.619 numjobs=1 00:08:31.619 00:08:31.619 verify_dump=1 00:08:31.619 verify_backlog=512 00:08:31.619 verify_state_save=0 00:08:31.619 do_verify=1 00:08:31.619 verify=crc32c-intel 00:08:31.619 [job0] 00:08:31.619 filename=/dev/nvme0n1 00:08:31.619 [job1] 
00:08:31.619 filename=/dev/nvme0n2 00:08:31.619 [job2] 00:08:31.619 filename=/dev/nvme0n3 00:08:31.619 [job3] 00:08:31.619 filename=/dev/nvme0n4 00:08:31.619 Could not set queue depth (nvme0n1) 00:08:31.619 Could not set queue depth (nvme0n2) 00:08:31.619 Could not set queue depth (nvme0n3) 00:08:31.619 Could not set queue depth (nvme0n4) 00:08:31.619 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:31.619 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:31.619 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:31.619 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:31.619 fio-3.35 00:08:31.619 Starting 4 threads 00:08:32.996 00:08:32.996 job0: (groupid=0, jobs=1): err= 0: pid=496905: Thu Jul 25 13:37:29 2024 00:08:32.996 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:08:32.996 slat (usec): min=3, max=5479, avg=111.58, stdev=550.63 00:08:32.996 clat (usec): min=7209, max=24088, avg=14016.22, stdev=3055.78 00:08:32.996 lat (usec): min=7220, max=25131, avg=14127.80, stdev=3101.28 00:08:32.996 clat percentiles (usec): 00:08:32.996 | 1.00th=[ 8455], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[11207], 00:08:32.996 | 30.00th=[11863], 40.00th=[13566], 50.00th=[13960], 60.00th=[14615], 00:08:32.996 | 70.00th=[15401], 80.00th=[16450], 90.00th=[18220], 95.00th=[19268], 00:08:32.996 | 99.00th=[21365], 99.50th=[22414], 99.90th=[23987], 99.95th=[23987], 00:08:32.996 | 99.99th=[23987] 00:08:32.996 write: IOPS=3874, BW=15.1MiB/s (15.9MB/s)(15.2MiB/1004msec); 0 zone resets 00:08:32.996 slat (usec): min=4, max=7127, avg=142.73, stdev=581.69 00:08:32.996 clat (usec): min=3240, max=34240, avg=19750.65, stdev=6990.99 00:08:32.996 lat (usec): min=3262, max=34263, avg=19893.38, stdev=7037.01 00:08:32.996 clat percentiles (usec): 00:08:32.996 | 1.00th=[ 7046], 5.00th=[10683], 10.00th=[10683], 20.00th=[12125], 00:08:32.996 | 30.00th=[13829], 40.00th=[17695], 50.00th=[20055], 60.00th=[22414], 00:08:32.996 | 70.00th=[24511], 80.00th=[26870], 90.00th=[29492], 95.00th=[30016], 00:08:32.996 | 99.00th=[32637], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:08:32.996 | 99.99th=[34341] 00:08:32.996 bw ( KiB/s): min=13720, max=16384, per=25.58%, avg=15052.00, stdev=1883.73, samples=2 00:08:32.996 iops : min= 3430, max= 4096, avg=3763.00, stdev=470.93, samples=2 00:08:32.996 lat (msec) : 4=0.25%, 10=6.06%, 20=65.48%, 50=28.20% 00:08:32.996 cpu : usr=6.18%, sys=8.37%, ctx=477, majf=0, minf=1 00:08:32.996 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:08:32.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:32.996 issued rwts: total=3584,3890,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.996 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:32.996 job1: (groupid=0, jobs=1): err= 0: pid=496906: Thu Jul 25 13:37:29 2024 00:08:32.996 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:08:32.996 slat (usec): min=3, max=14389, avg=128.23, stdev=827.38 00:08:32.996 clat (usec): min=4299, max=67048, avg=14986.09, stdev=8040.62 00:08:32.996 lat (usec): min=4316, max=67062, avg=15114.32, stdev=8123.73 00:08:32.996 clat percentiles (usec): 00:08:32.996 | 1.00th=[ 5604], 5.00th=[ 8586], 
10.00th=[ 9634], 20.00th=[10159], 00:08:32.996 | 30.00th=[10945], 40.00th=[12125], 50.00th=[13173], 60.00th=[14091], 00:08:32.996 | 70.00th=[15401], 80.00th=[16188], 90.00th=[23462], 95.00th=[27395], 00:08:32.996 | 99.00th=[56361], 99.50th=[62653], 99.90th=[66847], 99.95th=[66847], 00:08:32.996 | 99.99th=[66847] 00:08:32.996 write: IOPS=3765, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1009msec); 0 zone resets 00:08:32.996 slat (usec): min=4, max=15031, avg=131.91, stdev=728.52 00:08:32.996 clat (usec): min=2770, max=67018, avg=19559.96, stdev=13197.57 00:08:32.996 lat (usec): min=2787, max=67041, avg=19691.87, stdev=13287.41 00:08:32.996 clat percentiles (usec): 00:08:32.996 | 1.00th=[ 4752], 5.00th=[ 6980], 10.00th=[ 8455], 20.00th=[10159], 00:08:32.996 | 30.00th=[10945], 40.00th=[12911], 50.00th=[15926], 60.00th=[17433], 00:08:32.996 | 70.00th=[19792], 80.00th=[25035], 90.00th=[45876], 95.00th=[48497], 00:08:32.996 | 99.00th=[58459], 99.50th=[59507], 99.90th=[61604], 99.95th=[66847], 00:08:32.996 | 99.99th=[66847] 00:08:32.996 bw ( KiB/s): min=10168, max=19248, per=25.00%, avg=14708.00, stdev=6420.53, samples=2 00:08:32.996 iops : min= 2542, max= 4812, avg=3677.00, stdev=1605.13, samples=2 00:08:32.996 lat (msec) : 4=0.30%, 10=18.41%, 20=61.07%, 50=17.76%, 100=2.47% 00:08:32.996 cpu : usr=6.15%, sys=6.94%, ctx=410, majf=0, minf=1 00:08:32.996 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:08:32.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:32.996 issued rwts: total=3584,3799,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.996 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:32.996 job2: (groupid=0, jobs=1): err= 0: pid=496911: Thu Jul 25 13:37:29 2024 00:08:32.996 read: IOPS=3349, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1005msec) 00:08:32.996 slat (usec): min=2, max=46148, avg=160.94, stdev=1160.40 00:08:32.996 clat (usec): min=2128, max=61823, avg=19173.85, stdev=8791.86 00:08:32.996 lat (usec): min=6747, max=61856, avg=19334.79, stdev=8857.67 00:08:32.996 clat percentiles (usec): 00:08:32.996 | 1.00th=[ 7963], 5.00th=[11076], 10.00th=[12256], 20.00th=[12780], 00:08:32.996 | 30.00th=[13829], 40.00th=[15401], 50.00th=[15795], 60.00th=[17171], 00:08:32.996 | 70.00th=[20317], 80.00th=[24511], 90.00th=[31065], 95.00th=[38011], 00:08:32.996 | 99.00th=[56886], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:08:32.996 | 99.99th=[61604] 00:08:32.996 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:08:32.996 slat (usec): min=3, max=10553, avg=119.33, stdev=592.84 00:08:32.996 clat (usec): min=1000, max=64842, avg=17458.71, stdev=8194.58 00:08:32.996 lat (usec): min=1009, max=64872, avg=17578.03, stdev=8222.07 00:08:32.996 clat percentiles (usec): 00:08:32.996 | 1.00th=[ 4490], 5.00th=[ 9241], 10.00th=[11338], 20.00th=[12256], 00:08:32.996 | 30.00th=[12780], 40.00th=[14746], 50.00th=[15795], 60.00th=[17433], 00:08:32.996 | 70.00th=[20055], 80.00th=[21890], 90.00th=[24249], 95.00th=[25297], 00:08:32.996 | 99.00th=[58983], 99.50th=[58983], 99.90th=[59507], 99.95th=[63701], 00:08:32.996 | 99.99th=[64750] 00:08:32.996 bw ( KiB/s): min=13240, max=15432, per=24.36%, avg=14336.00, stdev=1549.98, samples=2 00:08:32.996 iops : min= 3310, max= 3858, avg=3584.00, stdev=387.49, samples=2 00:08:32.996 lat (msec) : 2=0.33%, 4=0.16%, 10=3.12%, 20=65.05%, 50=29.47% 00:08:32.996 lat (msec) : 100=1.87% 00:08:32.996 cpu : usr=3.49%, 
sys=5.18%, ctx=365, majf=0, minf=1 00:08:32.996 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:08:32.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:32.996 issued rwts: total=3366,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.996 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:32.996 job3: (groupid=0, jobs=1): err= 0: pid=496912: Thu Jul 25 13:37:29 2024 00:08:32.996 read: IOPS=3479, BW=13.6MiB/s (14.2MB/s)(13.7MiB/1010msec) 00:08:32.996 slat (usec): min=3, max=29379, avg=141.17, stdev=1075.68 00:08:32.997 clat (usec): min=648, max=68297, avg=17932.73, stdev=9521.83 00:08:32.997 lat (usec): min=7972, max=68315, avg=18073.89, stdev=9615.69 00:08:32.997 clat percentiles (usec): 00:08:32.997 | 1.00th=[ 9110], 5.00th=[10028], 10.00th=[11469], 20.00th=[11994], 00:08:32.997 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13173], 60.00th=[14091], 00:08:32.997 | 70.00th=[17433], 80.00th=[26608], 90.00th=[32900], 95.00th=[38536], 00:08:32.997 | 99.00th=[51643], 99.50th=[51643], 99.90th=[53216], 99.95th=[65274], 00:08:32.997 | 99.99th=[68682] 00:08:32.997 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:08:32.997 slat (usec): min=4, max=16483, avg=130.59, stdev=849.68 00:08:32.997 clat (usec): min=6222, max=49461, avg=17915.22, stdev=8496.58 00:08:32.997 lat (usec): min=6232, max=49480, avg=18045.81, stdev=8581.50 00:08:32.997 clat percentiles (usec): 00:08:32.997 | 1.00th=[ 8848], 5.00th=[10814], 10.00th=[11731], 20.00th=[11994], 00:08:32.997 | 30.00th=[12387], 40.00th=[13566], 50.00th=[13829], 60.00th=[16188], 00:08:32.997 | 70.00th=[19530], 80.00th=[22938], 90.00th=[32900], 95.00th=[33424], 00:08:32.997 | 99.00th=[48497], 99.50th=[48497], 99.90th=[49021], 99.95th=[49021], 00:08:32.997 | 99.99th=[49546] 00:08:32.997 bw ( KiB/s): min=13040, max=15632, per=24.36%, avg=14336.00, stdev=1832.82, samples=2 00:08:32.997 iops : min= 3260, max= 3908, avg=3584.00, stdev=458.21, samples=2 00:08:32.997 lat (usec) : 750=0.01% 00:08:32.997 lat (msec) : 10=3.87%, 20=72.16%, 50=23.02%, 100=0.93% 00:08:32.997 cpu : usr=5.65%, sys=8.72%, ctx=317, majf=0, minf=1 00:08:32.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:08:32.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:32.997 issued rwts: total=3514,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.997 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:32.997 00:08:32.997 Run status group 0 (all jobs): 00:08:32.997 READ: bw=54.3MiB/s (57.0MB/s), 13.1MiB/s-13.9MiB/s (13.7MB/s-14.6MB/s), io=54.9MiB (57.5MB), run=1004-1010msec 00:08:32.997 WRITE: bw=57.5MiB/s (60.3MB/s), 13.9MiB/s-15.1MiB/s (14.5MB/s-15.9MB/s), io=58.0MiB (60.9MB), run=1004-1010msec 00:08:32.997 00:08:32.997 Disk stats (read/write): 00:08:32.997 nvme0n1: ios=3111/3303, merge=0/0, ticks=17823/27515, in_queue=45338, util=98.10% 00:08:32.997 nvme0n2: ios=3087/3327, merge=0/0, ticks=43167/58918, in_queue=102085, util=86.59% 00:08:32.997 nvme0n3: ios=2895/3072, merge=0/0, ticks=20974/17699, in_queue=38673, util=88.91% 00:08:32.997 nvme0n4: ios=2794/3072, merge=0/0, ticks=24605/27589, in_queue=52194, util=98.52% 00:08:32.997 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:08:32.997 13:37:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=497046 00:08:32.997 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:08:32.997 13:37:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:08:32.997 [global] 00:08:32.997 thread=1 00:08:32.997 invalidate=1 00:08:32.997 rw=read 00:08:32.997 time_based=1 00:08:32.997 runtime=10 00:08:32.997 ioengine=libaio 00:08:32.997 direct=1 00:08:32.997 bs=4096 00:08:32.997 iodepth=1 00:08:32.997 norandommap=1 00:08:32.997 numjobs=1 00:08:32.997 00:08:32.997 [job0] 00:08:32.997 filename=/dev/nvme0n1 00:08:32.997 [job1] 00:08:32.997 filename=/dev/nvme0n2 00:08:32.997 [job2] 00:08:32.997 filename=/dev/nvme0n3 00:08:32.997 [job3] 00:08:32.997 filename=/dev/nvme0n4 00:08:32.997 Could not set queue depth (nvme0n1) 00:08:32.997 Could not set queue depth (nvme0n2) 00:08:32.997 Could not set queue depth (nvme0n3) 00:08:32.997 Could not set queue depth (nvme0n4) 00:08:32.997 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:32.997 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:32.997 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:32.997 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:32.997 fio-3.35 00:08:32.997 Starting 4 threads 00:08:36.281 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:08:36.281 13:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:08:36.281 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=483328, buflen=4096 00:08:36.281 fio: pid=497147, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:08:36.281 13:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:36.281 13:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:08:36.281 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=1048576, buflen=4096 00:08:36.281 fio: pid=497146, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:08:36.538 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=55209984, buflen=4096 00:08:36.538 fio: pid=497144, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:08:36.538 13:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:36.538 13:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:08:36.795 13:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:36.795 13:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:08:36.795 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=15077376, buflen=4096 00:08:36.795 fio: pid=497145, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:08:36.795 00:08:36.795 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=497144: Thu Jul 25 13:37:33 2024 00:08:36.795 read: IOPS=3975, BW=15.5MiB/s (16.3MB/s)(52.7MiB/3391msec) 00:08:36.795 slat (usec): min=5, max=26217, avg=16.51, stdev=309.74 00:08:36.795 clat (usec): min=168, max=1091, avg=230.36, stdev=39.25 00:08:36.795 lat (usec): min=174, max=26490, avg=246.87, stdev=314.56 00:08:36.795 clat percentiles (usec): 00:08:36.795 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 200], 00:08:36.795 | 30.00th=[ 215], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 235], 00:08:36.795 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 273], 95.00th=[ 289], 00:08:36.795 | 99.00th=[ 306], 99.50th=[ 326], 99.90th=[ 709], 99.95th=[ 807], 00:08:36.795 | 99.99th=[ 930] 00:08:36.795 bw ( KiB/s): min=14872, max=17960, per=83.80%, avg=15941.33, stdev=1050.94, samples=6 00:08:36.795 iops : min= 3718, max= 4490, avg=3985.33, stdev=262.74, samples=6 00:08:36.795 lat (usec) : 250=80.68%, 500=19.03%, 750=0.21%, 1000=0.07% 00:08:36.795 lat (msec) : 2=0.01% 00:08:36.795 cpu : usr=3.16%, sys=7.20%, ctx=13484, majf=0, minf=1 00:08:36.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:36.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.796 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.796 issued rwts: total=13480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:36.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:36.796 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=497145: Thu Jul 25 13:37:33 2024 00:08:36.796 read: IOPS=998, BW=3993KiB/s (4089kB/s)(14.4MiB/3687msec) 00:08:36.796 slat (usec): min=4, max=11905, avg=18.40, stdev=333.57 00:08:36.796 clat (usec): min=167, max=42060, avg=975.07, stdev=5513.06 00:08:36.796 lat (usec): min=172, max=42070, avg=993.47, stdev=5524.01 00:08:36.796 clat percentiles (usec): 00:08:36.796 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 194], 00:08:36.796 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 217], 00:08:36.796 | 70.00th=[ 235], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 273], 00:08:36.796 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:08:36.796 | 99.99th=[42206] 00:08:36.796 bw ( KiB/s): min= 96, max= 9656, per=20.88%, avg=3971.57, stdev=4072.30, samples=7 00:08:36.796 iops : min= 24, max= 2414, avg=992.86, stdev=1018.03, samples=7 00:08:36.796 lat (usec) : 250=83.19%, 500=14.69%, 750=0.19% 00:08:36.796 lat (msec) : 4=0.05%, 50=1.85% 00:08:36.796 cpu : usr=0.35%, sys=0.87%, ctx=3688, majf=0, minf=1 00:08:36.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:36.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.796 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.796 issued rwts: total=3682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:36.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:36.796 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=497146: Thu 
Jul 25 13:37:33 2024 00:08:36.796 read: IOPS=81, BW=326KiB/s (334kB/s)(1024KiB/3137msec) 00:08:36.796 slat (usec): min=6, max=7890, avg=40.65, stdev=491.58 00:08:36.796 clat (usec): min=198, max=42019, avg=12120.01, stdev=18572.07 00:08:36.796 lat (usec): min=205, max=48982, avg=12160.77, stdev=18629.15 00:08:36.796 clat percentiles (usec): 00:08:36.796 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 258], 00:08:36.796 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 437], 00:08:36.796 | 70.00th=[ 766], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:08:36.796 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:36.796 | 99.99th=[42206] 00:08:36.796 bw ( KiB/s): min= 96, max= 1536, per=1.77%, avg=337.33, stdev=587.23, samples=6 00:08:36.796 iops : min= 24, max= 384, avg=84.33, stdev=146.81, samples=6 00:08:36.796 lat (usec) : 250=19.84%, 500=47.86%, 750=1.95%, 1000=0.78% 00:08:36.796 lat (msec) : 2=0.39%, 50=28.79% 00:08:36.796 cpu : usr=0.03%, sys=0.13%, ctx=259, majf=0, minf=1 00:08:36.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:36.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.796 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.796 issued rwts: total=257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:36.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:36.796 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=497147: Thu Jul 25 13:37:33 2024 00:08:36.796 read: IOPS=41, BW=163KiB/s (167kB/s)(472KiB/2893msec) 00:08:36.796 slat (nsec): min=6463, max=40524, avg=21380.91, stdev=11015.59 00:08:36.796 clat (usec): min=206, max=42001, avg=24300.26, stdev=20109.04 00:08:36.796 lat (usec): min=215, max=42018, avg=24321.65, stdev=20115.32 00:08:36.796 clat percentiles (usec): 00:08:36.796 | 1.00th=[ 225], 5.00th=[ 258], 10.00th=[ 314], 20.00th=[ 404], 00:08:36.796 | 30.00th=[ 445], 40.00th=[ 537], 50.00th=[41157], 60.00th=[41157], 00:08:36.796 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:08:36.796 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:36.796 | 99.99th=[42206] 00:08:36.796 bw ( KiB/s): min= 96, max= 456, per=0.90%, avg=172.80, stdev=158.35, samples=5 00:08:36.796 iops : min= 24, max= 114, avg=43.20, stdev=39.59, samples=5 00:08:36.796 lat (usec) : 250=3.36%, 500=31.93%, 750=5.04% 00:08:36.796 lat (msec) : 10=0.84%, 50=57.98% 00:08:36.796 cpu : usr=0.00%, sys=0.17%, ctx=120, majf=0, minf=1 00:08:36.796 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:36.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.796 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.796 issued rwts: total=119,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:36.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:36.796 00:08:36.796 Run status group 0 (all jobs): 00:08:36.796 READ: bw=18.6MiB/s (19.5MB/s), 163KiB/s-15.5MiB/s (167kB/s-16.3MB/s), io=68.5MiB (71.8MB), run=2893-3687msec 00:08:36.796 00:08:36.796 Disk stats (read/write): 00:08:36.796 nvme0n1: ios=13450/0, merge=0/0, ticks=2950/0, in_queue=2950, util=94.31% 00:08:36.796 nvme0n2: ios=3716/0, merge=0/0, ticks=4201/0, in_queue=4201, util=98.31% 00:08:36.796 nvme0n3: ios=304/0, merge=0/0, ticks=4126/0, in_queue=4126, util=98.85% 00:08:36.796 nvme0n4: ios=164/0, merge=0/0, 
ticks=3060/0, in_queue=3060, util=99.08% 00:08:37.054 13:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:37.054 13:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:08:37.311 13:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:37.311 13:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:08:37.568 13:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:37.568 13:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:08:37.826 13:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:37.826 13:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:08:38.083 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:08:38.083 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 497046 00:08:38.083 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:08:38.083 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:38.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.341 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:38.341 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:08:38.341 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:38.341 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:38.341 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:38.341 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:38.341 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:08:38.341 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:08:38.341 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:08:38.341 nvmf hotplug test: fio failed as expected 00:08:38.341 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:38.599 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:08:38.599 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 
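The run traced above is the hotplug test: target/fio.sh starts a 10-second read job against all four namespaces (fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10), then deletes the RAID and malloc bdevs backing them over RPC while I/O is still in flight. The io_u "Remote I/O error" lines and the nonzero exit status (fio_status=4) are therefore the pass condition, which is why the script prints "nvmf hotplug test: fio failed as expected" before disconnecting the controller and polling lsblk until the SPDKISFASTANDAWESOME serial disappears. A minimal sketch of that sequence, reusing the wrapper flags and rpc.py subcommands from the trace (paths abbreviated; the control flow around them is illustrative, not the literal fio.sh source):

    # Kick off a long-running read workload against the exported namespaces.
    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!

    # Pull the bdevs out from under the running job; each delete surfaces as a
    # "Remote I/O error" in the fio job that was reading that namespace.
    scripts/rpc.py bdev_raid_delete concat0
    scripts/rpc.py bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        scripts/rpc.py bdev_malloc_delete "$m"
    done

    # fio exiting nonzero once its files vanish is the expected outcome.
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'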
00:08:38.599 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:08:38.599 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:08:38.599 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:08:38.599 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:38.599 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:08:38.599 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:38.599 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:08:38.599 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:38.599 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:38.599 rmmod nvme_tcp 00:08:38.599 rmmod nvme_fabrics 00:08:38.599 rmmod nvme_keyring 00:08:38.599 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:38.599 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:08:38.599 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:08:38.599 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 495115 ']' 00:08:38.599 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 495115 00:08:38.599 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 495115 ']' 00:08:38.599 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 495115 00:08:38.599 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:08:38.599 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:38.599 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 495115 00:08:38.599 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:38.599 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:38.600 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 495115' 00:08:38.600 killing process with pid 495115 00:08:38.600 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 495115 00:08:38.600 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 495115 00:08:38.858 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:38.858 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:38.858 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:38.858 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:38.858 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:38.858 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:38.858 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.858 13:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:41.405 00:08:41.405 real 0m23.305s 00:08:41.405 user 1m22.022s 00:08:41.405 sys 0m6.511s 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:41.405 ************************************ 00:08:41.405 END TEST nvmf_fio_target 00:08:41.405 ************************************ 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:41.405 ************************************ 00:08:41.405 START TEST nvmf_bdevio 00:08:41.405 ************************************ 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:41.405 * Looking for test storage... 00:08:41.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:08:41.405 13:37:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 
00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:43.309 
Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:43.309 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:43.309 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 
)) 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:43.309 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:43.309 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:43.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:43.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:08:43.310 00:08:43.310 --- 10.0.0.2 ping statistics --- 00:08:43.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.310 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:43.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:43.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:08:43.310 00:08:43.310 --- 10.0.0.1 ping statistics --- 00:08:43.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.310 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=499774 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 499774 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 499774 ']' 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
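For readability: the nvmf_tcp_init plumbing traced above condenses to the sequence below. This is a sketch of this run only; the cvl_0_0/cvl_0_1 interface names come from the ice driver on this E810 rig, and the 10.0.0.x addresses are the harness defaults.

    # Target port moves into its own network namespace; the initiator port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                               # NVMF_INITIATOR_IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # NVMF_FIRST_TARGET_IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # let NVMe/TCP (port 4420) in
    ping -c 1 10.0.0.2                                                # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                  # target ns -> root ns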
00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:43.310 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:43.310 [2024-07-25 13:37:40.245575] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:43.310 [2024-07-25 13:37:40.245655] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.310 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.310 [2024-07-25 13:37:40.312504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.568 [2024-07-25 13:37:40.423725] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.568 [2024-07-25 13:37:40.423772] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.568 [2024-07-25 13:37:40.423792] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.568 [2024-07-25 13:37:40.423803] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.568 [2024-07-25 13:37:40.423813] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:43.568 [2024-07-25 13:37:40.423906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:43.568 [2024-07-25 13:37:40.423972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:43.568 [2024-07-25 13:37:40.424036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:43.568 [2024-07-25 13:37:40.424038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.568 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:43.568 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:08:43.568 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:43.569 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:43.569 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:43.569 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.569 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:43.569 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.569 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:43.569 [2024-07-25 13:37:40.582600] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.569 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.569 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:43.569 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.569 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:43.827 Malloc0 00:08:43.827 
13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.827 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:43.827 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.827 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:43.827 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.827 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:43.827 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.827 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:43.827 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.827 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:43.827 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.827 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:43.827 [2024-07-25 13:37:40.636298] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.827 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.828 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:08:43.828 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:08:43.828 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:08:43.828 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:08:43.828 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:43.828 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:43.828 { 00:08:43.828 "params": { 00:08:43.828 "name": "Nvme$subsystem", 00:08:43.828 "trtype": "$TEST_TRANSPORT", 00:08:43.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:43.828 "adrfam": "ipv4", 00:08:43.828 "trsvcid": "$NVMF_PORT", 00:08:43.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:43.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:43.828 "hdgst": ${hdgst:-false}, 00:08:43.828 "ddgst": ${ddgst:-false} 00:08:43.828 }, 00:08:43.828 "method": "bdev_nvme_attach_controller" 00:08:43.828 } 00:08:43.828 EOF 00:08:43.828 )") 00:08:43.828 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:08:43.828 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
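The rpc_cmd invocations above are thin wrappers over SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock; done by hand, the same one-subsystem, one-Malloc-bdev target would be stood up roughly as follows (a sketch; all values are the ones this run uses):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192   # -u: 8192 B in-capsule data; -o is the TCP-only c2h-success toggle
    $RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then attaches to this listener as Nvme1 using the bdev_nvme_attach_controller JSON that gen_nvmf_target_json is rendering around this point, handed over on --json /dev/fd/62.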
00:08:43.828 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:08:43.828 13:37:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:43.828 "params": { 00:08:43.828 "name": "Nvme1", 00:08:43.828 "trtype": "tcp", 00:08:43.828 "traddr": "10.0.0.2", 00:08:43.828 "adrfam": "ipv4", 00:08:43.828 "trsvcid": "4420", 00:08:43.828 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:43.828 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:43.828 "hdgst": false, 00:08:43.828 "ddgst": false 00:08:43.828 }, 00:08:43.828 "method": "bdev_nvme_attach_controller" 00:08:43.828 }' 00:08:43.828 [2024-07-25 13:37:40.684097] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:43.828 [2024-07-25 13:37:40.684166] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid499919 ] 00:08:43.828 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.828 [2024-07-25 13:37:40.744600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:43.828 [2024-07-25 13:37:40.860093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.828 [2024-07-25 13:37:40.860134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.828 [2024-07-25 13:37:40.860138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.394 I/O targets: 00:08:44.394 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:08:44.394 00:08:44.394 00:08:44.394 CUnit - A unit testing framework for C - Version 2.1-3 00:08:44.394 http://cunit.sourceforge.net/ 00:08:44.394 00:08:44.394 00:08:44.394 Suite: bdevio tests on: Nvme1n1 00:08:44.394 Test: blockdev write read block ...passed 00:08:44.394 Test: blockdev write zeroes read block ...passed 00:08:44.395 Test: blockdev write zeroes read no split ...passed 00:08:44.395 Test: blockdev write zeroes read split ...passed 00:08:44.395 Test: blockdev write zeroes read split partial ...passed 00:08:44.395 Test: blockdev reset ...[2024-07-25 13:37:41.323226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:08:44.395 [2024-07-25 13:37:41.323348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ec580 (9): Bad file descriptor 00:08:44.395 [2024-07-25 13:37:41.338279] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:44.395 passed 00:08:44.395 Test: blockdev write read 8 blocks ...passed 00:08:44.395 Test: blockdev write read size > 128k ...passed 00:08:44.395 Test: blockdev write read invalid size ...passed 00:08:44.395 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:44.395 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:44.395 Test: blockdev write read max offset ...passed 00:08:44.652 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:44.652 Test: blockdev writev readv 8 blocks ...passed 00:08:44.652 Test: blockdev writev readv 30 x 1block ...passed 00:08:44.652 Test: blockdev writev readv block ...passed 00:08:44.652 Test: blockdev writev readv size > 128k ...passed 00:08:44.652 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:44.652 Test: blockdev comparev and writev ...[2024-07-25 13:37:41.552145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:44.652 [2024-07-25 13:37:41.552182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:08:44.652 [2024-07-25 13:37:41.552208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:44.652 [2024-07-25 13:37:41.552226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:08:44.652 [2024-07-25 13:37:41.552572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:44.652 [2024-07-25 13:37:41.552597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:08:44.652 [2024-07-25 13:37:41.552627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:44.652 [2024-07-25 13:37:41.552645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:08:44.652 [2024-07-25 13:37:41.552972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:44.652 [2024-07-25 13:37:41.552995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:08:44.652 [2024-07-25 13:37:41.553017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:44.652 [2024-07-25 13:37:41.553033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:08:44.652 [2024-07-25 13:37:41.553371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:44.652 [2024-07-25 13:37:41.553396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:08:44.652 [2024-07-25 13:37:41.553418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:44.652 [2024-07-25 13:37:41.553434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:08:44.652 passed 00:08:44.652 Test: blockdev nvme passthru rw ...passed 00:08:44.652 Test: blockdev nvme passthru vendor specific ...[2024-07-25 13:37:41.635311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:44.652 [2024-07-25 13:37:41.635348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:08:44.652 [2024-07-25 13:37:41.635490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:44.652 [2024-07-25 13:37:41.635512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:08:44.652 [2024-07-25 13:37:41.635651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:44.652 [2024-07-25 13:37:41.635675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:08:44.652 [2024-07-25 13:37:41.635808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:44.652 [2024-07-25 13:37:41.635831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:08:44.652 passed 00:08:44.652 Test: blockdev nvme admin passthru ...passed 00:08:44.911 Test: blockdev copy ...passed 00:08:44.911 00:08:44.911 Run Summary: Type Total Ran Passed Failed Inactive 00:08:44.911 suites 1 1 n/a 0 0 00:08:44.911 tests 23 23 23 0 0 00:08:44.911 asserts 152 152 152 0 n/a 00:08:44.911 00:08:44.911 Elapsed time = 0.977 seconds 00:08:44.911 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:44.911 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.911 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:45.170 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.170 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:08:45.170 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:08:45.170 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:45.170 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:08:45.170 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:45.170 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:08:45.170 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:45.170 13:37:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:45.170 rmmod nvme_tcp 00:08:45.170 rmmod nvme_fabrics 00:08:45.170 rmmod nvme_keyring 00:08:45.170 13:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:45.170 13:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:08:45.170 13:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
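The nvmftestfini teardown traced here and continued below is the mirror image of the setup; condensed, and with the namespace step hedged since _remove_spdk_ns runs with its xtrace redirected away:

    modprobe -v -r nvme-tcp        # rmmod nvme_tcp; nvme_fabrics/nvme_keyring follow as they go idle
    modprobe -v -r nvme-fabrics
    kill 499774 && wait 499774     # nvmf_tgt pid for this run (its comm shows up as reactor_3)
    # _remove_spdk_ns (output hidden via '15> /dev/null') presumably deletes the namespace,
    # returning cvl_0_0 to the root namespace:
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1       # drop the initiator-side address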
00:08:45.170 13:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 499774 ']' 00:08:45.170 13:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 499774 00:08:45.170 13:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 499774 ']' 00:08:45.170 13:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 499774 00:08:45.170 13:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:08:45.170 13:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:45.170 13:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 499774 00:08:45.170 13:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:08:45.170 13:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:08:45.170 13:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 499774' 00:08:45.170 killing process with pid 499774 00:08:45.170 13:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 499774 00:08:45.170 13:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 499774 00:08:45.428 13:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:45.428 13:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:45.428 13:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:45.428 13:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:45.428 13:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:45.428 13:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.428 13:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.429 13:37:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:47.964 00:08:47.964 real 0m6.500s 00:08:47.964 user 0m10.696s 00:08:47.964 sys 0m2.144s 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:47.964 ************************************ 00:08:47.964 END TEST nvmf_bdevio 00:08:47.964 ************************************ 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:47.964 00:08:47.964 real 3m51.515s 00:08:47.964 user 9m57.680s 00:08:47.964 sys 1m7.480s 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.964 ************************************ 00:08:47.964 END TEST nvmf_target_core 00:08:47.964 ************************************ 00:08:47.964 13:37:44 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:47.964 13:37:44 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:47.964 13:37:44 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.964 13:37:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:47.964 ************************************ 00:08:47.964 START TEST nvmf_target_extra 00:08:47.964 ************************************ 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:47.964 * Looking for test storage... 00:08:47.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
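The common.sh prologue just sourced exports the initiator identity (NVME_HOSTNQN/NVME_HOSTID from 'nvme gen-hostnqn', NVME_CONNECT='nvme connect') for the kernel-mode tests in this suite; consumed with nvme-cli, and assuming the listener details this job uses throughout, a connect would look like:

    # Hypothetical usage sketch; NQN/host IDs are the values generated above for this run.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55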
00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:08:47.964 ************************************ 00:08:47.964 START TEST nvmf_example 00:08:47.964 ************************************ 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:47.964 * Looking for test storage... 00:08:47.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:47.964 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.965 13:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:47.965 13:37:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:49.885 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:49.885 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:49.886 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:49.886 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.886 13:37:46 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:49.886 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:49.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:49.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:08:49.886 00:08:49.886 --- 10.0.0.2 ping statistics --- 00:08:49.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.886 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:49.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:08:49.886 00:08:49.886 --- 10.0.0.1 ping statistics --- 00:08:49.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.886 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:49.886 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:49.887 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=502042 00:08:49.887 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:49.887 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:49.887 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 502042 00:08:49.887 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 502042 ']' 00:08:49.887 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.887 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.887 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.887 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.887 13:37:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:50.145 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.078 13:37:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.078 13:37:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:08:51.078 13:37:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:51.078 13:37:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:51.078 13:37:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:51.078 13:37:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:51.078 13:37:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.078 13:37:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:51.078 13:37:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.078 13:37:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:51.079 13:37:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.079 13:37:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:51.079 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.079 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:51.079 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:51.079 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.079 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:51.079 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.079 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:51.079 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:51.079 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.079 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:51.079 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.079 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.079 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@561 -- # xtrace_disable
00:08:51.079 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:08:51.079 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:51.079 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:08:51.079 13:37:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:08:51.079 EAL: No free 2048 kB hugepages reported on node 1
00:09:03.287 Initializing NVMe Controllers
00:09:03.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:03.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:03.287 Initialization complete. Launching workers.
00:09:03.287 ========================================================
00:09:03.287 Latency(us)
00:09:03.287 Device Information : IOPS MiB/s Average min max
00:09:03.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15110.30 59.02 4235.51 858.27 16136.46
00:09:03.287 ========================================================
00:09:03.287 Total : 15110.30 59.02 4235.51 858.27 16136.46
00:09:03.287
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:03.287 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 502042 ']'
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 502042
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 502042 ']'
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 502042
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 502042
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']'
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 502042'
00:09:03.287 killing process with pid 502042
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 502042
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 502042
00:09:03.287 nvmf threads initialize successfully
00:09:03.287 bdev subsystem init successfully
00:09:03.287 created a nvmf target service
00:09:03.287 create targets's poll groups done
00:09:03.287 all subsystems of target started
00:09:03.287 nvmf target is running
00:09:03.287 all subsystems of target stopped
00:09:03.287 destroy targets's poll groups done
00:09:03.287 destroyed the nvmf target service
00:09:03.287 bdev subsystem finish successfully
00:09:03.287 nvmf threads destroy successfully
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:03.287 13:37:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:03.857 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:03.857 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:09:03.857 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable
00:09:03.857 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:03.857
00:09:03.857 real 0m16.097s
00:09:03.857 user 0m45.491s
00:09:03.857 sys 0m3.373s
00:09:03.857 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:03.857 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:03.857 ************************************
00:09:03.857 END TEST nvmf_example
00:09:03.857 ************************************
00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
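Condensed from the trace above: once the namespace plumbing checks out, the whole nvmf_example target is assembled with five RPCs and then driven from the initiator side with spdk_nvme_perf, which sustained 15110.30 IOPS (59.02 MiB/s) at 4235.51 us average latency over the 10 s run. The sketch below restates those commands verbatim from the trace; rpc_cmd is the autotest_common.sh wrapper that forwards to scripts/rpc.py on /var/tmp/spdk.sock (and in this job it additionally runs under ip netns exec cvl_0_0_ns_spdk), so treat this as a reading aid rather than a standalone script:

# target side: build/examples/nvmf is already listening (pid 502042, started with -i 0 -g 10000 -m 0xF)
rpc_cmd nvmf_create_transport -t tcp -o -u 8192      # TCP transport with the options assembled in NVMF_TRANSPORT_OPTS
rpc_cmd bdev_malloc_create 64 512                    # 64 MB RAM-backed bdev, 512 B blocks -> Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: queue depth 64, 4 KiB I/O, random 30/70 read/write mix, 10 seconds
spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'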
00:09:03.858 ************************************ 00:09:03.858 START TEST nvmf_filesystem 00:09:03.858 ************************************ 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:03.858 * Looking for test storage... 00:09:03.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # 
CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # 
CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # 
CONFIG_EXAMPLES=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:03.858 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ 
#ifndef SPDK_CONFIG_H 00:09:03.859 #define SPDK_CONFIG_H 00:09:03.859 #define SPDK_CONFIG_APPS 1 00:09:03.859 #define SPDK_CONFIG_ARCH native 00:09:03.859 #undef SPDK_CONFIG_ASAN 00:09:03.859 #undef SPDK_CONFIG_AVAHI 00:09:03.859 #undef SPDK_CONFIG_CET 00:09:03.859 #define SPDK_CONFIG_COVERAGE 1 00:09:03.859 #define SPDK_CONFIG_CROSS_PREFIX 00:09:03.859 #undef SPDK_CONFIG_CRYPTO 00:09:03.859 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:03.859 #undef SPDK_CONFIG_CUSTOMOCF 00:09:03.859 #undef SPDK_CONFIG_DAOS 00:09:03.859 #define SPDK_CONFIG_DAOS_DIR 00:09:03.859 #define SPDK_CONFIG_DEBUG 1 00:09:03.859 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:03.859 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:03.859 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:03.859 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:03.859 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:03.859 #undef SPDK_CONFIG_DPDK_UADK 00:09:03.859 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:03.859 #define SPDK_CONFIG_EXAMPLES 1 00:09:03.859 #undef SPDK_CONFIG_FC 00:09:03.859 #define SPDK_CONFIG_FC_PATH 00:09:03.859 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:03.859 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:03.859 #undef SPDK_CONFIG_FUSE 00:09:03.859 #undef SPDK_CONFIG_FUZZER 00:09:03.859 #define SPDK_CONFIG_FUZZER_LIB 00:09:03.859 #undef SPDK_CONFIG_GOLANG 00:09:03.859 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:03.859 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:03.859 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:03.859 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:03.859 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:03.859 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:03.859 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:03.859 #define SPDK_CONFIG_IDXD 1 00:09:03.859 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:03.859 #undef SPDK_CONFIG_IPSEC_MB 00:09:03.859 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:03.859 #define SPDK_CONFIG_ISAL 1 00:09:03.859 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:03.859 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:03.859 #define SPDK_CONFIG_LIBDIR 00:09:03.859 #undef SPDK_CONFIG_LTO 00:09:03.859 #define SPDK_CONFIG_MAX_LCORES 128 00:09:03.859 #define SPDK_CONFIG_NVME_CUSE 1 00:09:03.859 #undef SPDK_CONFIG_OCF 00:09:03.859 #define SPDK_CONFIG_OCF_PATH 00:09:03.859 #define SPDK_CONFIG_OPENSSL_PATH 00:09:03.859 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:03.859 #define SPDK_CONFIG_PGO_DIR 00:09:03.859 #undef SPDK_CONFIG_PGO_USE 00:09:03.859 #define SPDK_CONFIG_PREFIX /usr/local 00:09:03.859 #undef SPDK_CONFIG_RAID5F 00:09:03.859 #undef SPDK_CONFIG_RBD 00:09:03.859 #define SPDK_CONFIG_RDMA 1 00:09:03.859 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:03.859 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:03.859 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:03.859 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:03.859 #define SPDK_CONFIG_SHARED 1 00:09:03.859 #undef SPDK_CONFIG_SMA 00:09:03.859 #define SPDK_CONFIG_TESTS 1 00:09:03.859 #undef SPDK_CONFIG_TSAN 00:09:03.859 #define SPDK_CONFIG_UBLK 1 00:09:03.859 #define SPDK_CONFIG_UBSAN 1 00:09:03.859 #undef SPDK_CONFIG_UNIT_TESTS 00:09:03.859 #undef SPDK_CONFIG_URING 00:09:03.859 #define SPDK_CONFIG_URING_PATH 00:09:03.859 #undef SPDK_CONFIG_URING_ZNS 00:09:03.859 #undef SPDK_CONFIG_USDT 00:09:03.859 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:03.859 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:03.859 #define SPDK_CONFIG_VFIO_USER 1 00:09:03.859 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:03.859 #define SPDK_CONFIG_VHOST 1 00:09:03.859 
#define SPDK_CONFIG_VIRTIO 1 00:09:03.859 #undef SPDK_CONFIG_VTUNE 00:09:03.859 #define SPDK_CONFIG_VTUNE_DIR 00:09:03.859 #define SPDK_CONFIG_WERROR 1 00:09:03.859 #define SPDK_CONFIG_WPDK_DIR 00:09:03.859 #undef SPDK_CONFIG_XNVME 00:09:03.859 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:03.859 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:03.860 13:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
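The long run of paired trace lines here, a bare `: <default>` immediately followed by `export SPDK_TEST_*`, continues below this point; it is autotest_common.sh stamping a default into every test-selection flag before exporting it to child scripts. A minimal sketch of the idiom, reconstructed from the trace rather than copied from the script (the two flags and values shown are the ones this very log exports):

# `:` is a no-op; the ${var:=default} expansion inside it assigns only when the
# variable is unset or empty, which is why xtrace prints bare `: 0` / `: tcp` lines
: "${SPDK_TEST_NVMF:=1}"
: "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
export SPDK_TEST_NVMF SPDK_TEST_NVMF_TRANSPORT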
00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:03.860 13:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:09:03.860 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:03.861 13:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:03.861 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 503805 ]] 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 503805 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.JJ7Yyh 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.JJ7Yyh/tests/target /tmp/spdk.JJ7Yyh 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@329 -- # df -T 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=953643008 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4330786816 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=56452427776 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=61994713088 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=5542285312 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30987436032 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=9920512 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 
00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12376535040 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12398944256 00:09:03.862 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22409216 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30997020672 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=335872 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6199463936 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6199468032 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:09:03.863 * Looking for test storage... 
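The probe traced above is set_test_storage from common/autotest_common.sh: `df -T` output is read row by row into bash associative arrays keyed by mount point, after which each storage candidate is matched to its backing mount and the mount's available space is compared against the requested size (2147483648 bytes here, padded to 2214592512 in the trace). A minimal sketch of that parsing loop, assuming this simplified standalone form (error handling and the candidate-directory walk are omitted):

# sketch only -- simplified from set_test_storage, not the verbatim helper
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
  mounts["$mount"]=$source
  fss["$mount"]=$fs
  sizes["$mount"]=$size
  avails["$mount"]=$avail
  uses["$mount"]=$use
done < <(df -T | grep -v Filesystem)

On this host the search lands on the overlay root (spdk_root), so SPDK_TEST_STORAGE ends up under the checked-out workspace, as the trace below shows.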
00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=56452427776 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=7756877824 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.863 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:09:03.864 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:03.864 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:03.864 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:09:03.864 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.864 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.864 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:03.864 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:03.864 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:03.864 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:03.864 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:03.864 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:03.864 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:03.864 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.864 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:03.864 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:03.864 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:03.864 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.864 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.864 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.864 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:03.864 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:03.864 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:03.864 13:38:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:06.399 
13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:06.399 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:06.399 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:06.399 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:06.399 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes
00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:06.399 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:06.400 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:09:06.400 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:06.400 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:06.400 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:09:06.400 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:06.400 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:06.400 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:09:06.400 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:09:06.400 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:09:06.400 13:38:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:06.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:06.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms
00:09:06.400
00:09:06.400 --- 10.0.0.2 ping statistics ---
00:09:06.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:06.400 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:06.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:06.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms
00:09:06.400
00:09:06.400 --- 10.0.0.1 ping statistics ---
00:09:06.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:06.400 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:09:06.400 ************************************
00:09:06.400 START TEST nvmf_filesystem_no_in_capsule
00:09:06.400 ************************************
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=505486
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 505486
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 505486 ']'
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:06.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:06.400 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:06.400 [2024-07-25 13:38:03.191471] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:09:06.400 [2024-07-25 13:38:03.191560] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:06.400 EAL: No free 2048 kB hugepages reported on node 1
00:09:06.400 [2024-07-25 13:38:03.253299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:06.400 [2024-07-25 13:38:03.355259] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:06.400 [2024-07-25 13:38:03.355312] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:06.400 [2024-07-25 13:38:03.355333] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:06.400 [2024-07-25 13:38:03.355350] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:06.400 [2024-07-25 13:38:03.355360] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
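The point of the nvmf_tcp_init sequence above: with both E810 ports in one host, the target-side port is isolated in its own network namespace so that initiator traffic genuinely crosses the link rather than being short-circuited through the local stack, and nvmf_tgt is then launched inside that namespace. Condensed from the trace (same interface and namespace names; the SPDK path is shortened):

# recap of the traced commands, not new configuration
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port leaves the host netns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays behind
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # sanity check in each direction
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF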
00:09:06.400 [2024-07-25 13:38:03.355439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:06.400 [2024-07-25 13:38:03.355503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:09:06.400 [2024-07-25 13:38:03.355568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:09:06.400 [2024-07-25 13:38:03.355571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:06.691 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:06.691 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:06.692 [2024-07-25 13:38:03.511613] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:06.692 Malloc1
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:06.692 [2024-07-25 13:38:03.696134] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.692 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:06.953 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.953 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[
00:09:06.953 {
00:09:06.953 "name": "Malloc1",
00:09:06.953 "aliases": [
00:09:06.953 "054203c3-f87c-416b-b8ab-684ac20dc19f"
00:09:06.953 ],
00:09:06.953 "product_name": "Malloc disk",
00:09:06.953 "block_size": 512,
00:09:06.953 "num_blocks": 1048576,
00:09:06.953 "uuid": "054203c3-f87c-416b-b8ab-684ac20dc19f",
00:09:06.953 "assigned_rate_limits": {
00:09:06.953 "rw_ios_per_sec": 0,
00:09:06.953 "rw_mbytes_per_sec": 0,
00:09:06.953 "r_mbytes_per_sec": 0,
00:09:06.953 "w_mbytes_per_sec": 0
00:09:06.953 },
00:09:06.953 "claimed": true,
00:09:06.953 "claim_type": "exclusive_write",
00:09:06.953 "zoned": false,
00:09:06.953 "supported_io_types": {
00:09:06.953 "read": true,
00:09:06.953 "write": true,
00:09:06.953 "unmap": true,
00:09:06.953 "flush": true,
00:09:06.953 "reset": true,
00:09:06.953 "nvme_admin": false,
00:09:06.953 "nvme_io": false,
00:09:06.953 "nvme_io_md": false,
00:09:06.953 "write_zeroes": true,
00:09:06.953 "zcopy": true,
00:09:06.953 "get_zone_info": false,
00:09:06.953 "zone_management": false,
00:09:06.953 "zone_append": false,
00:09:06.953 "compare": false,
00:09:06.953 "compare_and_write": false,
00:09:06.953 "abort": true,
00:09:06.953 "seek_hole": false,
00:09:06.953 "seek_data": false,
00:09:06.953 "copy": true,
00:09:06.953 "nvme_iov_md": false
00:09:06.953 },
00:09:06.953 "memory_domains": [
00:09:06.953 {
00:09:06.953 "dma_device_id": "system",
00:09:06.953 "dma_device_type": 1
00:09:06.953 },
00:09:06.953 {
00:09:06.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:06.953 "dma_device_type": 2
00:09:06.953 }
00:09:06.953 ],
00:09:06.953 "driver_specific": {}
00:09:06.953 }
00:09:06.953 ]'
00:09:06.953 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:09:06.953 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512
00:09:06.953 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:09:06.953 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576
00:09:06.953 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512
00:09:06.953 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512
00:09:06.953 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:09:06.953 13:38:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:09:07.522 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:09:07.522 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0
00:09:07.522 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:09:07.522 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:09:07.522 13:38:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2
00:09:10.057 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:09:10.057 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:09:10.057 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:09:10.057 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:09:10.057 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:09:10.057 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0
00:09:10.057 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:09:10.057 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:09:10.057 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:09:10.057 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:09:10.057 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:09:10.057 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:09:10.057 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:09:10.057 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:09:10.057 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:09:10.057 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:09:10.057 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:09:10.057 13:38:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:09:10.994 13:38:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:09:11.934 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']'
00:09:11.934 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1
00:09:11.934 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:09:11.934 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:11.934 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:11.934 ************************************
00:09:11.934 START TEST filesystem_ext4
00:09:11.934 ************************************
00:09:11.934 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1
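At this point the target is fully provisioned over RPC and the initiator is attached; the filesystem tests below work on /dev/nvme0n1p1. The same provisioning sequence written as direct scripts/rpc.py calls, on the assumption that `rpc_cmd` is a thin wrapper around that script (values copied from the trace; `-c 0` sets the in-capsule data size to zero, which is what this `no_in_capsule` variant exercises):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1        # 512 MiB bdev, 512 B blocks -> 1048576 blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side: attach, then carve a single GPT partition for the fs tests
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe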
00:09:11.934 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:09:11.934 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:09:11.934 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:09:11.934 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4
00:09:11.934 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1
00:09:11.934 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0
00:09:11.934 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force
00:09:11.934 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']'
00:09:11.934 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F
00:09:11.934 13:38:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:09:11.934 mke2fs 1.46.5 (30-Dec-2021)
00:09:11.934 Discarding device blocks: 0/522240 done
00:09:11.934 Creating filesystem with 522240 1k blocks and 130560 inodes
00:09:11.934 Filesystem UUID: d426e41a-ac43-442d-b606-c6f09cee7ba5
00:09:11.934 Superblock backups stored on blocks:
00:09:11.934 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:09:11.934
00:09:11.934 Allocating group tables: 0/64 done
00:09:11.934 Writing inode tables: 0/64 done
00:09:12.191 Creating journal (8192 blocks): done
00:09:12.758 Writing superblocks and filesystem accounting information: 0/64 done
00:09:12.758
00:09:12.758 13:38:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0
00:09:12.758 13:38:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:09:13.327 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 505486
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:09:13.328
00:09:13.328 real 0m1.344s
00:09:13.328 user 0m0.027s
00:09:13.328 sys 0m0.051s
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x
00:09:13.328 ************************************
00:09:13.328 END TEST filesystem_ext4
00:09:13.328 ************************************
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:09:13.328 ************************************
00:09:13.328 START TEST filesystem_btrfs
00:09:13.328 ************************************
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']'
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f
00:09:13.328 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:09:13.586 btrfs-progs v6.6.2
00:09:13.586 See https://btrfs.readthedocs.io for more information.
00:09:13.586
00:09:13.586 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:09:13.586 NOTE: several default settings have changed in version 5.15, please make sure
00:09:13.586 this does not affect your deployments:
00:09:13.586 - DUP for metadata (-m dup)
00:09:13.586 - enabled no-holes (-O no-holes)
00:09:13.586 - enabled free-space-tree (-R free-space-tree)
00:09:13.586
00:09:13.586 Label: (null)
00:09:13.586 UUID: 6d6372b5-7547-4524-8046-07a59318af09
00:09:13.586 Node size: 16384
00:09:13.586 Sector size: 4096
00:09:13.586 Filesystem size: 510.00MiB
00:09:13.586 Block group profiles:
00:09:13.586 Data: single 8.00MiB
00:09:13.586 Metadata: DUP 32.00MiB
00:09:13.586 System: DUP 8.00MiB
00:09:13.586 SSD detected: yes
00:09:13.586 Zoned device: no
00:09:13.586 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:09:13.586 Runtime features: free-space-tree
00:09:13.586 Checksum: crc32c
00:09:13.586 Number of devices: 1
00:09:13.586 Devices:
00:09:13.586 ID SIZE PATH
00:09:13.586 1 510.00MiB /dev/nvme0n1p1
00:09:13.586
00:09:13.586 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0
00:09:13.586 13:38:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:09:14.153 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:09:14.153 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync
00:09:14.153 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:09:14.413 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync
00:09:14.413 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0
00:09:14.413 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:09:14.413 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 505486
00:09:14.413 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:09:14.413 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:09:14.413 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 --
lsblk -l -o NAME 00:09:14.413 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:14.413 00:09:14.413 real 0m0.994s 00:09:14.413 user 0m0.024s 00:09:14.413 sys 0m0.110s 00:09:14.413 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.413 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:14.413 ************************************ 00:09:14.413 END TEST filesystem_btrfs 00:09:14.413 ************************************ 00:09:14.413 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:14.413 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:14.413 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.413 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:14.413 ************************************ 00:09:14.413 START TEST filesystem_xfs 00:09:14.413 ************************************ 00:09:14.413 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:09:14.413 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:14.413 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:14.413 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:14.413 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:09:14.413 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:14.414 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:09:14.414 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:09:14.414 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:09:14.414 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:09:14.414 13:38:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:14.414 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:14.414 = sectsz=512 attr=2, projid32bit=1 00:09:14.414 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:14.414 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:09:14.414 data = bsize=4096 blocks=130560, imaxpct=25 00:09:14.414 = sunit=0 swidth=0 blks 00:09:14.414 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:14.414 log =internal log bsize=4096 blocks=16384, version=2 00:09:14.414 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:14.414 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:15.349 Discarding blocks...Done. 00:09:15.350 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:09:15.350 13:38:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 505486 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:17.890 00:09:17.890 real 0m3.369s 00:09:17.890 user 0m0.008s 00:09:17.890 sys 0m0.067s 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:17.890 ************************************ 00:09:17.890 END TEST filesystem_xfs 00:09:17.890 ************************************ 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:17.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
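The three filesystem_* tests above (ext4, btrfs, xfs) all drive the same check from target/filesystem.sh: build the filesystem on the remote namespace, mount it, push a small write through it, unmount, and confirm that both the target process and the block devices survived. A condensed sketch of that sequence, reconstructed from the xtrace lines in this log (the device node, mount point, and PID 505486 are the values from this particular run):

  mount /dev/nvme0n1p1 /mnt/device         # mount the freshly built fs on the NVMe-oF partition
  touch /mnt/device/aaa                    # small write through the page cache
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 505486                           # assert the nvmf_tgt process is still alive
  lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still exposed to the initiator
  lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition table intact
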
00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 505486 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 505486 ']' 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 505486 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 505486 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 505486' 00:09:17.890 killing process with pid 505486 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 505486 00:09:17.890 13:38:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 505486 00:09:18.456 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:18.456 00:09:18.456 real 0m12.168s 00:09:18.456 user 0m46.590s 00:09:18.456 sys 0m1.802s 00:09:18.456 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:18.456 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:18.456 ************************************ 00:09:18.456 END TEST nvmf_filesystem_no_in_capsule 00:09:18.456 ************************************ 00:09:18.456 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:18.456 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:18.457 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:18.457 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:18.457 ************************************ 00:09:18.457 START TEST nvmf_filesystem_in_capsule 00:09:18.457 ************************************ 00:09:18.457 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:09:18.457 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:18.457 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:18.457 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:18.457 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:18.457 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:18.457 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=507053 00:09:18.457 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:18.457 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 507053 00:09:18.457 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 507053 ']' 00:09:18.457 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.457 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:18.457 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
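This marks the switch from the zero-byte to the 4096-byte in-capsule variant: nvmf_filesystem_part 4096 sets in_capsule=4096, and the only functional difference from the run above is that the TCP transport is created with -c 4096 (visible in the RPC trace below), so writes of up to 4 KiB travel inside the command capsule instead of waiting for a separate R2T data transfer. A condensed sketch of the startup being traced here, using the values from this run (rpc_cmd is the test suite's wrapper around scripts/rpc.py):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                                    # 507053 in this run
  waitforlisten $nvmfpid                        # poll /var/tmp/spdk.sock until the target serves RPCs
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c = max in-capsule data size
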
00:09:18.457 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:18.457 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:18.457 [2024-07-25 13:38:15.415130] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:18.457 [2024-07-25 13:38:15.415209] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.457 EAL: No free 2048 kB hugepages reported on node 1 00:09:18.457 [2024-07-25 13:38:15.476402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:18.714 [2024-07-25 13:38:15.582146] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.714 [2024-07-25 13:38:15.582204] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:18.714 [2024-07-25 13:38:15.582232] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.714 [2024-07-25 13:38:15.582243] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.714 [2024-07-25 13:38:15.582253] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:18.714 [2024-07-25 13:38:15.582314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.714 [2024-07-25 13:38:15.582384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:18.714 [2024-07-25 13:38:15.582442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:18.714 [2024-07-25 13:38:15.582444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.714 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:18.714 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:09:18.714 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:18.714 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:18.714 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:18.714 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.714 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:18.714 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:18.714 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.714 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:18.714 [2024-07-25 13:38:15.737636] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport 
Init *** 00:09:18.714 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.714 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:18.714 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.714 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:18.972 Malloc1 00:09:18.972 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.972 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:18.972 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.972 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:18.972 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.972 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:18.972 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.972 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:18.972 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.972 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:18.972 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.972 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:18.972 [2024-07-25 13:38:15.905979] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:18.972 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.972 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:18.972 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:18.972 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:18.972 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:18.972 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:18.972 13:38:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:18.972 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.972 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:18.972 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.973 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:18.973 { 00:09:18.973 "name": "Malloc1", 00:09:18.973 "aliases": [ 00:09:18.973 "da757f9c-050b-44e1-9487-83cdf5d11f5a" 00:09:18.973 ], 00:09:18.973 "product_name": "Malloc disk", 00:09:18.973 "block_size": 512, 00:09:18.973 "num_blocks": 1048576, 00:09:18.973 "uuid": "da757f9c-050b-44e1-9487-83cdf5d11f5a", 00:09:18.973 "assigned_rate_limits": { 00:09:18.973 "rw_ios_per_sec": 0, 00:09:18.973 "rw_mbytes_per_sec": 0, 00:09:18.973 "r_mbytes_per_sec": 0, 00:09:18.973 "w_mbytes_per_sec": 0 00:09:18.973 }, 00:09:18.973 "claimed": true, 00:09:18.973 "claim_type": "exclusive_write", 00:09:18.973 "zoned": false, 00:09:18.973 "supported_io_types": { 00:09:18.973 "read": true, 00:09:18.973 "write": true, 00:09:18.973 "unmap": true, 00:09:18.973 "flush": true, 00:09:18.973 "reset": true, 00:09:18.973 "nvme_admin": false, 00:09:18.973 "nvme_io": false, 00:09:18.973 "nvme_io_md": false, 00:09:18.973 "write_zeroes": true, 00:09:18.973 "zcopy": true, 00:09:18.973 "get_zone_info": false, 00:09:18.973 "zone_management": false, 00:09:18.973 "zone_append": false, 00:09:18.973 "compare": false, 00:09:18.973 "compare_and_write": false, 00:09:18.973 "abort": true, 00:09:18.973 "seek_hole": false, 00:09:18.973 "seek_data": false, 00:09:18.973 "copy": true, 00:09:18.973 "nvme_iov_md": false 00:09:18.973 }, 00:09:18.973 "memory_domains": [ 00:09:18.973 { 00:09:18.973 "dma_device_id": "system", 00:09:18.973 "dma_device_type": 1 00:09:18.973 }, 00:09:18.973 { 00:09:18.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.973 "dma_device_type": 2 00:09:18.973 } 00:09:18.973 ], 00:09:18.973 "driver_specific": {} 00:09:18.973 } 00:09:18.973 ]' 00:09:18.973 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:18.973 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:18.973 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:18.973 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:18.973 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:18.973 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:18.973 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:18.973 13:38:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:19.909 13:38:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:19.909 13:38:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:19.909 13:38:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:19.909 13:38:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:19.909 13:38:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:21.815 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:21.815 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:21.815 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:21.815 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:21.815 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:21.815 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:21.815 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:21.815 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:21.815 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:21.815 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:21.815 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:21.815 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:21.815 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:21.815 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:21.815 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:21.815 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:21.815 13:38:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:22.075 13:38:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:22.643 13:38:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:24.020 13:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:24.020 13:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:24.020 13:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:24.020 13:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.020 13:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:24.020 ************************************ 00:09:24.020 START TEST filesystem_in_capsule_ext4 00:09:24.020 ************************************ 00:09:24.020 13:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:24.020 13:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:24.020 13:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:24.020 13:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:24.020 13:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:09:24.020 13:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:24.020 13:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:09:24.020 13:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:09:24.020 13:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:09:24.020 13:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:09:24.020 13:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:24.020 mke2fs 1.46.5 (30-Dec-2021) 00:09:24.020 Discarding device blocks: 0/522240 done 00:09:24.020 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:24.020 Filesystem UUID: 7bb04ec9-c9e2-4502-8b41-65f820cdaf6c 00:09:24.020 Superblock backups stored on blocks: 00:09:24.020 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:24.020 00:09:24.020 Allocating group tables: 0/64 done 00:09:24.020 Writing inode tables: 
0/64 done 00:09:24.020 Creating journal (8192 blocks): done 00:09:24.956 Writing superblocks and filesystem accounting information: 0/64 done 00:09:24.956 00:09:24.956 13:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:09:24.956 13:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:25.524 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:25.524 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:25.524 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:25.524 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:25.524 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:25.524 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:25.524 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 507053 00:09:25.525 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:25.525 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:25.525 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:25.525 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:25.525 00:09:25.525 real 0m1.758s 00:09:25.525 user 0m0.018s 00:09:25.525 sys 0m0.050s 00:09:25.525 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.525 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:25.525 ************************************ 00:09:25.525 END TEST filesystem_in_capsule_ext4 00:09:25.525 ************************************ 00:09:25.525 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:25.525 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:25.525 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.525 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:25.525 
************************************ 00:09:25.525 START TEST filesystem_in_capsule_btrfs 00:09:25.525 ************************************ 00:09:25.525 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:25.525 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:25.525 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:25.525 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:25.525 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:09:25.525 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:25.525 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:09:25.525 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:09:25.525 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:09:25.525 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:09:25.525 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:25.784 btrfs-progs v6.6.2 00:09:25.784 See https://btrfs.readthedocs.io for more information. 00:09:25.784 00:09:25.784 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:25.784 NOTE: several default settings have changed in version 5.15, please make sure 00:09:25.784 this does not affect your deployments: 00:09:25.784 - DUP for metadata (-m dup) 00:09:25.784 - enabled no-holes (-O no-holes) 00:09:25.784 - enabled free-space-tree (-R free-space-tree) 00:09:25.784 00:09:25.784 Label: (null) 00:09:25.784 UUID: c232fbbc-488a-4b7b-9154-ef38a3c768e1 00:09:25.784 Node size: 16384 00:09:25.784 Sector size: 4096 00:09:25.784 Filesystem size: 510.00MiB 00:09:25.784 Block group profiles: 00:09:25.784 Data: single 8.00MiB 00:09:25.784 Metadata: DUP 32.00MiB 00:09:25.784 System: DUP 8.00MiB 00:09:25.784 SSD detected: yes 00:09:25.784 Zoned device: no 00:09:25.784 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:25.784 Runtime features: free-space-tree 00:09:25.784 Checksum: crc32c 00:09:25.784 Number of devices: 1 00:09:25.784 Devices: 00:09:25.784 ID SIZE PATH 00:09:25.784 1 510.00MiB /dev/nvme0n1p1 00:09:25.784 00:09:25.784 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:09:25.784 13:38:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:26.353 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:26.353 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:26.353 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:26.353 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:26.353 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:26.353 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:26.354 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 507053 00:09:26.354 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:26.354 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:26.354 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:26.354 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:26.354 00:09:26.354 real 0m0.767s 00:09:26.354 user 0m0.019s 00:09:26.354 sys 0m0.105s 00:09:26.354 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.354 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- common/autotest_common.sh@10 -- # set +x 00:09:26.354 ************************************ 00:09:26.354 END TEST filesystem_in_capsule_btrfs 00:09:26.354 ************************************ 00:09:26.354 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:26.354 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:26.354 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.354 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:26.354 ************************************ 00:09:26.354 START TEST filesystem_in_capsule_xfs 00:09:26.354 ************************************ 00:09:26.354 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:09:26.354 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:26.354 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:26.354 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:26.354 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:09:26.354 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:26.354 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:09:26.354 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:09:26.354 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:09:26.354 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:09:26.354 13:38:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:26.612 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:26.612 = sectsz=512 attr=2, projid32bit=1 00:09:26.612 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:26.612 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:26.612 data = bsize=4096 blocks=130560, imaxpct=25 00:09:26.612 = sunit=0 swidth=0 blks 00:09:26.612 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:26.612 log =internal log bsize=4096 blocks=16384, version=2 00:09:26.612 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:26.612 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:27.179 Discarding blocks...Done. 
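For reference, the make_filesystem helper whose trace precedes each mkfs above (autotest_common.sh @926-@945) boils down to picking the right force flag per filesystem and returning 0 once mkfs succeeds. A minimal reconstruction from the xtrace output; the retry machinery that the unused counter i hints at is never exercised in this log and is omitted here:

  make_filesystem() {
      local fstype=$1
      local dev_name=$2
      local i=0                    # retry counter (never incremented in this run)
      local force
      if [ "$fstype" = ext4 ]; then
          force=-F                 # mkfs.ext4 spells "force" as -F
      else
          force=-f                 # mkfs.btrfs and mkfs.xfs use -f
      fi
      mkfs.$fstype $force "$dev_name" && return 0
  }
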
00:09:27.179 13:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:09:27.179 13:38:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:29.087 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:29.087 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:29.087 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:29.087 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:29.087 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:29.087 13:38:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:29.087 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 507053 00:09:29.087 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:29.087 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:29.087 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:29.087 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:29.087 00:09:29.087 real 0m2.733s 00:09:29.087 user 0m0.025s 00:09:29.087 sys 0m0.051s 00:09:29.087 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.087 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:29.087 ************************************ 00:09:29.087 END TEST filesystem_in_capsule_xfs 00:09:29.087 ************************************ 00:09:29.087 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:29.345 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:29.345 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:29.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.345 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:29.345 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:09:29.345 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:29.345 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.345 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:29.346 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.346 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:29.346 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:29.346 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.346 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.346 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.346 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:29.346 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 507053 00:09:29.346 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 507053 ']' 00:09:29.346 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 507053 00:09:29.604 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:09:29.604 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:29.604 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 507053 00:09:29.604 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:29.604 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:29.604 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 507053' 00:09:29.604 killing process with pid 507053 00:09:29.604 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 507053 00:09:29.604 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 507053 00:09:29.863 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:29.863 00:09:29.863 real 0m11.501s 00:09:29.863 user 0m44.038s 00:09:29.863 sys 0m1.711s 00:09:29.863 13:38:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.863 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.863 ************************************ 00:09:29.863 END TEST nvmf_filesystem_in_capsule 00:09:29.863 ************************************ 00:09:29.863 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:29.863 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:29.863 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:09:29.863 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:29.863 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:09:29.863 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:29.863 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:29.863 rmmod nvme_tcp 00:09:30.123 rmmod nvme_fabrics 00:09:30.123 rmmod nvme_keyring 00:09:30.123 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:30.123 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:09:30.123 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:09:30.123 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:30.123 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:30.123 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:30.123 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:30.123 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:30.123 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:30.123 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.123 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.123 13:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.048 13:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:32.048 00:09:32.048 real 0m28.293s 00:09:32.048 user 1m31.572s 00:09:32.048 sys 0m5.181s 00:09:32.048 13:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.048 13:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:32.048 ************************************ 00:09:32.048 END TEST nvmf_filesystem 00:09:32.048 ************************************ 00:09:32.048 13:38:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:32.048 13:38:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:32.048 13:38:29 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:32.048 13:38:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:32.048 ************************************ 00:09:32.048 START TEST nvmf_target_discovery 00:09:32.048 ************************************ 00:09:32.048 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:32.048 * Looking for test storage... 00:09:32.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:32.048 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:32.307 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:32.307 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:32.307 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:32.307 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:32.307 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:32.307 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:32.307 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:32.307 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:32.307 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:32.307 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:32.307 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:32.307 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:32.307 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:32.307 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:32.307 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:32.307 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:32.307 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:32.307 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:32.307 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.307 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.307 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:32.308 13:38:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:09:32.308 13:38:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:34.239 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:34.239 13:38:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:34.239 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:34.239 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:34.240 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:34.240 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:34.240 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
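At this point nvmf/common.sh has split the two E810 ports across network namespaces so the target and initiator can talk over real hardware on one machine. Condensed from the commands logged above and the connectivity checks that follow (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are specific to this run):

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves in
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (host)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP in
  ping -c 1 10.0.0.2                                                  # host -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host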
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:34.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:34.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms
00:09:34.498
00:09:34.498 --- 10.0.0.2 ping statistics ---
00:09:34.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:34.498 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:34.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:34.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms
00:09:34.498
00:09:34.498 --- 10.0.0.1 ping statistics ---
00:09:34.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:34.498 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=510514
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 510514
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 510514 ']'
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:34.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:34.498 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:34.498 [2024-07-25 13:38:31.409815] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:09:34.498 [2024-07-25 13:38:31.409895] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:34.498 EAL: No free 2048 kB hugepages reported on node 1
00:09:34.498 [2024-07-25 13:38:31.476018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:34.757 [2024-07-25 13:38:31.590427] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:34.757 [2024-07-25 13:38:31.590495] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:34.757 [2024-07-25 13:38:31.590522] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:34.757 [2024-07-25 13:38:31.590533] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:34.757 [2024-07-25 13:38:31.590544] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:34.757 [2024-07-25 13:38:31.590610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:34.757 [2024-07-25 13:38:31.590677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:09:34.757 [2024-07-25 13:38:31.590791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:34.757 [2024-07-25 13:38:31.590787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0
00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable
00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:34.757 [2024-07-25 13:38:31.737288] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:34.757 13:38:31
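rpc_cmd above is the suite's wrapper around SPDK's JSON-RPC client; outside the harness, the same bring-up can be approximated with scripts/rpc.py against the default /var/tmp/spdk.sock socket that the waitforlisten lines poll. A sketch under those assumptions (the busy-wait below is a crude stand-in for the harness's more careful waitforlisten):

  # Launch nvmf_tgt inside the target namespace, as nvmfappstart did above.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # wait for the RPC socket
  # Create the TCP transport; '-t tcp -o -u 8192' is NVMF_TRANSPORT_OPTS from above.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192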
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.757 Null1 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.757 [2024-07-25 13:38:31.777600] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:34.757 Null2 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.757 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.017 Null3 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.017 Null4 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:35.017 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.018 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.018 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.018 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:35.018 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.018 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.018 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.018 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:09:35.018 00:09:35.018 
Discovery Log Number of Records 6, Generation counter 6
00:09:35.018 =====Discovery Log Entry 0======
00:09:35.018 trtype: tcp
00:09:35.018 adrfam: ipv4
00:09:35.018 subtype: current discovery subsystem
00:09:35.018 treq: not required
00:09:35.018 portid: 0
00:09:35.018 trsvcid: 4420
00:09:35.018 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:09:35.018 traddr: 10.0.0.2
00:09:35.018 eflags: explicit discovery connections, duplicate discovery information
00:09:35.018 sectype: none
00:09:35.018 =====Discovery Log Entry 1======
00:09:35.018 trtype: tcp
00:09:35.018 adrfam: ipv4
00:09:35.018 subtype: nvme subsystem
00:09:35.018 treq: not required
00:09:35.018 portid: 0
00:09:35.018 trsvcid: 4420
00:09:35.018 subnqn: nqn.2016-06.io.spdk:cnode1
00:09:35.018 traddr: 10.0.0.2
00:09:35.018 eflags: none
00:09:35.018 sectype: none
00:09:35.018 =====Discovery Log Entry 2======
00:09:35.018 trtype: tcp
00:09:35.018 adrfam: ipv4
00:09:35.018 subtype: nvme subsystem
00:09:35.018 treq: not required
00:09:35.018 portid: 0
00:09:35.018 trsvcid: 4420
00:09:35.018 subnqn: nqn.2016-06.io.spdk:cnode2
00:09:35.018 traddr: 10.0.0.2
00:09:35.018 eflags: none
00:09:35.018 sectype: none
00:09:35.018 =====Discovery Log Entry 3======
00:09:35.018 trtype: tcp
00:09:35.018 adrfam: ipv4
00:09:35.018 subtype: nvme subsystem
00:09:35.018 treq: not required
00:09:35.018 portid: 0
00:09:35.018 trsvcid: 4420
00:09:35.018 subnqn: nqn.2016-06.io.spdk:cnode3
00:09:35.018 traddr: 10.0.0.2
00:09:35.018 eflags: none
00:09:35.018 sectype: none
00:09:35.018 =====Discovery Log Entry 4======
00:09:35.018 trtype: tcp
00:09:35.018 adrfam: ipv4
00:09:35.018 subtype: nvme subsystem
00:09:35.018 treq: not required
00:09:35.018 portid: 0
00:09:35.018 trsvcid: 4420
00:09:35.018 subnqn: nqn.2016-06.io.spdk:cnode4
00:09:35.018 traddr: 10.0.0.2
00:09:35.018 eflags: none
00:09:35.018 sectype: none
00:09:35.018 =====Discovery Log Entry 5======
00:09:35.018 trtype: tcp
00:09:35.018 adrfam: ipv4
00:09:35.018 subtype: discovery subsystem referral
00:09:35.018 treq: not required
00:09:35.018 portid: 0
00:09:35.018 trsvcid: 4430
00:09:35.018 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:09:35.018 traddr: 10.0.0.2
00:09:35.018 eflags: none
00:09:35.018 sectype: none
00:09:35.018 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:09:35.018 Perform nvmf subsystem discovery via RPC
00:09:35.018 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:09:35.018 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:35.018 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:09:35.018 [
00:09:35.018 {
00:09:35.018 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:09:35.018 "subtype": "Discovery",
00:09:35.018 "listen_addresses": [
00:09:35.018 {
00:09:35.018 "trtype": "TCP",
00:09:35.018 "adrfam": "IPv4",
00:09:35.018 "traddr": "10.0.0.2",
00:09:35.018 "trsvcid": "4420"
00:09:35.018 }
00:09:35.018 ],
00:09:35.018 "allow_any_host": true,
00:09:35.018 "hosts": []
00:09:35.018 },
00:09:35.018 {
00:09:35.018 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:09:35.018 "subtype": "NVMe",
00:09:35.018 "listen_addresses": [
00:09:35.018 {
00:09:35.018 "trtype": "TCP",
00:09:35.018 "adrfam": "IPv4",
00:09:35.018 "traddr": "10.0.0.2",
00:09:35.018 "trsvcid": "4420"
00:09:35.018 }
00:09:35.018 ],
00:09:35.018 "allow_any_host": true,
00:09:35.018 "hosts": [],
00:09:35.018 "serial_number": "SPDK00000000000001",
00:09:35.018 "model_number": "SPDK bdev Controller",
00:09:35.018 "max_namespaces": 32,
00:09:35.018 "min_cntlid": 1,
00:09:35.018 "max_cntlid": 65519,
00:09:35.018 "namespaces": [
00:09:35.018 {
00:09:35.018 "nsid": 1,
00:09:35.018 "bdev_name": "Null1",
00:09:35.018 "name": "Null1",
00:09:35.018 "nguid": "18F32846157A4EA792D57D6F3EDECC32",
00:09:35.018 "uuid": "18f32846-157a-4ea7-92d5-7d6f3edecc32"
00:09:35.018 }
00:09:35.018 ]
00:09:35.018 },
00:09:35.018 {
00:09:35.018 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:09:35.018 "subtype": "NVMe",
00:09:35.018 "listen_addresses": [
00:09:35.018 {
00:09:35.018 "trtype": "TCP",
00:09:35.018 "adrfam": "IPv4",
00:09:35.018 "traddr": "10.0.0.2",
00:09:35.018 "trsvcid": "4420"
00:09:35.018 }
00:09:35.018 ],
00:09:35.018 "allow_any_host": true,
00:09:35.018 "hosts": [],
00:09:35.018 "serial_number": "SPDK00000000000002",
00:09:35.018 "model_number": "SPDK bdev Controller",
00:09:35.018 "max_namespaces": 32,
00:09:35.018 "min_cntlid": 1,
00:09:35.018 "max_cntlid": 65519,
00:09:35.018 "namespaces": [
00:09:35.018 {
00:09:35.018 "nsid": 1,
00:09:35.018 "bdev_name": "Null2",
00:09:35.018 "name": "Null2",
00:09:35.018 "nguid": "C8DDC958E674473F8DC5FE93A7B602AD",
00:09:35.018 "uuid": "c8ddc958-e674-473f-8dc5-fe93a7b602ad"
00:09:35.018 }
00:09:35.018 ]
00:09:35.018 },
00:09:35.018 {
00:09:35.018 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:09:35.018 "subtype": "NVMe",
00:09:35.018 "listen_addresses": [
00:09:35.018 {
00:09:35.018 "trtype": "TCP",
00:09:35.018 "adrfam": "IPv4",
00:09:35.018 "traddr": "10.0.0.2",
00:09:35.018 "trsvcid": "4420"
00:09:35.018 }
00:09:35.018 ],
00:09:35.018 "allow_any_host": true,
00:09:35.018 "hosts": [],
00:09:35.018 "serial_number": "SPDK00000000000003",
00:09:35.018 "model_number": "SPDK bdev Controller",
00:09:35.018 "max_namespaces": 32,
00:09:35.018 "min_cntlid": 1,
00:09:35.018 "max_cntlid": 65519,
00:09:35.018 "namespaces": [
00:09:35.018 {
00:09:35.018 "nsid": 1,
00:09:35.018 "bdev_name": "Null3",
00:09:35.018 "name": "Null3",
00:09:35.018 "nguid": "6DC8A79E066B4D5EA2957706AA7EE2EC",
00:09:35.018 "uuid": "6dc8a79e-066b-4d5e-a295-7706aa7ee2ec"
00:09:35.018 }
00:09:35.018 ]
00:09:35.018 },
00:09:35.018 {
00:09:35.018 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:09:35.018 "subtype": "NVMe",
00:09:35.018 "listen_addresses": [
00:09:35.018 {
00:09:35.018 "trtype": "TCP",
00:09:35.018 "adrfam": "IPv4",
00:09:35.018 "traddr": "10.0.0.2",
00:09:35.018 "trsvcid": "4420"
00:09:35.018 }
00:09:35.018 ],
00:09:35.018 "allow_any_host": true,
00:09:35.018 "hosts": [],
00:09:35.018 "serial_number": "SPDK00000000000004",
00:09:35.018 "model_number": "SPDK bdev Controller",
00:09:35.018 "max_namespaces": 32,
00:09:35.018 "min_cntlid": 1,
00:09:35.018 "max_cntlid": 65519,
00:09:35.018 "namespaces": [
00:09:35.018 {
00:09:35.018 "nsid": 1,
00:09:35.018 "bdev_name": "Null4",
00:09:35.018 "name": "Null4",
00:09:35.018 "nguid": "C97708DF9C2A4294B3EB3334F571F5CD",
00:09:35.018 "uuid": "c97708df-9c2a-4294-b3eb-3334f571f5cd"
00:09:35.018 }
00:09:35.018 ]
00:09:35.018 }
00:09:35.018 ]
00:09:35.018 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:35.018 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:09:35.018 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:09:35.018 13:38:31
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.018 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.018 13:38:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.018 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.018 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 
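The loop running here is the mirror-image teardown: each subsystem is deleted before its backing bdev, the referral is removed, and discovery.sh@49 then asserts that no bdevs remain. Roughly, sketched again with scripts/rpc.py (the emptiness check mirrors the logged jq pipeline):

  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # subsystem first
  ./scripts/rpc.py bdev_null_delete Null1                             # then its bdev
  # ... repeated for cnode2-4 / Null2-4 ...
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  leftover=$(./scripts/rpc.py bdev_get_bdevs | jq -r '.[].name')
  [ -z "$leftover" ] || { echo "unexpected bdevs remain: $leftover"; exit 1; }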
00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.019 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:35.277 rmmod nvme_tcp 00:09:35.277 rmmod nvme_fabrics 00:09:35.277 rmmod nvme_keyring 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:35.277 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- 
# return 0 00:09:35.278 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 510514 ']' 00:09:35.278 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 510514 00:09:35.278 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 510514 ']' 00:09:35.278 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 510514 00:09:35.278 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:09:35.278 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:35.278 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 510514 00:09:35.278 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:35.278 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:35.278 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 510514' 00:09:35.278 killing process with pid 510514 00:09:35.278 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 510514 00:09:35.278 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 510514 00:09:35.536 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:35.536 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:35.536 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:35.536 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:35.536 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:35.536 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.536 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.536 13:38:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:38.072 00:09:38.072 real 0m5.459s 00:09:38.072 user 0m4.195s 00:09:38.072 sys 0m1.862s 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:38.072 ************************************ 00:09:38.072 END TEST nvmf_target_discovery 00:09:38.072 ************************************ 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:38.072 ************************************ 00:09:38.072 START TEST nvmf_referrals 00:09:38.072 ************************************ 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:38.072 * Looking for test storage... 00:09:38.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[duplicate toolchain prefixes trimmed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.072 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[duplicate toolchain prefixes trimmed]:/var/lib/snapd/snap/bin 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[duplicate toolchain prefixes trimmed]:/var/lib/snapd/snap/bin 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[duplicate toolchain prefixes trimmed]:/var/lib/snapd/snap/bin 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:38.073 13:38:34 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:38.073 13:38:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 
00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:39.978 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:39.978 
13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:39.978 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:39.978 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:39.978 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.978 13:38:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:39.978 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:39.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:39.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:09:39.979 00:09:39.979 --- 10.0.0.2 ping statistics --- 00:09:39.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.979 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:09:39.979 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:39.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:39.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:09:39.979 00:09:39.979 --- 10.0.0.1 ping statistics --- 00:09:39.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.979 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:09:39.979 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:39.979 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:39.979 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:39.979 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:39.979 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:39.979 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:39.979 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:39.979 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:39.979 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:39.979 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:39.979 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:39.979 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:39.979 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:39.979 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=512601 00:09:39.979 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:39.979 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 512601 00:09:39.979 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 512601 ']' 00:09:39.979 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.979 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:39.979 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
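
The dual-namespace TCP topology that nvmf_tcp_init built above reduces to the sequence below. This is a sketch assembled from the trace for readability, not an extra step run by the suite; the interface names are the two enumerated E810 ports:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1  # drop stale addresses
    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns

With both pings answered, nvmf_tgt is launched inside the namespace (the ip netns exec prefix comes from NVMF_TARGET_NS_CMD), so the discovery service listens on 10.0.0.2 while nvme-cli probes from the root namespace.
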
00:09:39.979 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:39.979 13:38:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:39.979 [2024-07-25 13:38:36.870456] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:39.979 [2024-07-25 13:38:36.870547] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.979 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.979 [2024-07-25 13:38:36.934627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:40.237 [2024-07-25 13:38:37.044595] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.237 [2024-07-25 13:38:37.044646] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.237 [2024-07-25 13:38:37.044669] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.237 [2024-07-25 13:38:37.044679] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.237 [2024-07-25 13:38:37.044688] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:40.237 [2024-07-25 13:38:37.044767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.237 [2024-07-25 13:38:37.044831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.237 [2024-07-25 13:38:37.044950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.237 [2024-07-25 13:38:37.044944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.237 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:40.237 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:09:40.237 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:40.237 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:40.237 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.237 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.237 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:40.237 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.237 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.237 [2024-07-25 13:38:37.189226] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:40.237 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.237 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:40.237 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.237 13:38:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.237 [2024-07-25 13:38:37.201480] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:09:40.237 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.237 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:40.237 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.237 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.237 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.238 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:40.238 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.238 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.238 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.238 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:40.238 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.238 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.238 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.238 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:40.238 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:40.238 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.238 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.238 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.238 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:40.238 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:40.238 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:40.238 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:40.238 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:40.238 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.238 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.238 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:40.495 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.495 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:09:40.495 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:40.495 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:40.495 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:40.495 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:40.495 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:40.495 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:40.495 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:40.495 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:40.495 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:40.495 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:40.495 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.495 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.495 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.495 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:40.495 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.495 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.495 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.495 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:40.495 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.495 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.495 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.496 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:40.496 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.496 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:40.496 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.496 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.753 13:38:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:40.753 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:40.753 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:40.753 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:40.753 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:40.753 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:40.753 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:40.753 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:40.753 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:40.753 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:09:40.753 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.753 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.753 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.753 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:40.753 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.753 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.753 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.753 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:40.753 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:40.753 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:40.753 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:40.753 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.753 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.753 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:40.754 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.754 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:40.754 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:40.754 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
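
The referral bookkeeping exercised in this test is driven entirely over the RPC socket. Condensed from the trace, and assuming rpc_cmd is the suite's wrapper around scripts/rpc.py, the round-trip is:

    # register three plain referrals, then confirm via the RPC view
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    rpc_cmd nvmf_discovery_get_referrals | jq length                    # expect 3
    rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr'   # the three IPs
    # remove them by the same (transport, traddr, trsvcid) key
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc_cmd nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done
    # one traddr can then be referred twice under different subsystem NQNs
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

which is why get_referral_ips rpc just printed 127.0.0.2 twice.
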
00:09:40.754 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:40.754 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:40.754 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:40.754 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:40.754 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:41.013 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:41.013 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:41.013 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:41.013 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:41.013 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:41.013 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:41.013 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:41.013 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:41.013 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:41.013 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:41.013 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:41.013 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:41.013 13:38:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.271 13:38:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:41.271 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:41.529 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:41.529 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:41.529 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 
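
The matching host-side check parses the discovery log page instead of the RPC view. A condensed form of the get_referral_ips nvme and get_discovery_entries helpers seen above (the --hostnqn/--hostid flags are elided here for brevity; the jq filters are merged into one step):

    # every referral address as the initiator sees it
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    # one class of entry, e.g. the subsystem referral registered with -n cnode1
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'

Both views have to agree at every step, which is what the [[ ... == ... ]] comparisons keep asserting.
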
00:09:41.529 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:41.529 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:41.529 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:41.529 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:41.529 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:41.529 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.529 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.529 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.529 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:41.529 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:41.529 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.529 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.529 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
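
The nvmftestfini teardown now running reduces to the following steps, a sketch of what the trace shows next (pid and interface names as used in this run):

    sync
    modprobe -v -r nvme-tcp        # unloads nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 512601 && wait 512601     # killprocess on the nvmf_tgt reactor
    _remove_spdk_ns                # deletes the cvl_0_0_ns_spdk namespace
    ip -4 addr flush cvl_0_1       # drop the initiator-side address
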
00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:41.788 rmmod nvme_tcp 00:09:41.788 rmmod nvme_fabrics 00:09:41.788 rmmod nvme_keyring 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 512601 ']' 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 512601 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 512601 ']' 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 512601 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 512601 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 512601' 00:09:41.788 killing process with pid 512601 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 512601 00:09:41.788 13:38:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 512601 00:09:42.047 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:42.047 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:42.047 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:42.047 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:42.047 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:42.047 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.047 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.047 13:38:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.582 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:44.582 00:09:44.582 real 0m6.572s 00:09:44.582 user 0m9.351s 00:09:44.582 sys 0m2.120s 00:09:44.582 13:38:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.582 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:44.582 ************************************ 00:09:44.582 END TEST nvmf_referrals 00:09:44.582 ************************************ 00:09:44.582 13:38:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:44.582 13:38:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:44.582 13:38:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.582 13:38:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:44.582 ************************************ 00:09:44.582 START TEST nvmf_connect_disconnect 00:09:44.582 ************************************ 00:09:44.582 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:44.582 * Looking for test storage... 00:09:44.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.582 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.582 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:44.582 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.582 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.582 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.582 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.582 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.582 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.582 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.582 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.582 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.582 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.582 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:44.582 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.583 13:38:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[duplicate toolchain prefixes trimmed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[duplicate toolchain prefixes trimmed]:/var/lib/snapd/snap/bin 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[duplicate toolchain prefixes trimmed]:/var/lib/snapd/snap/bin 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:[duplicate toolchain prefixes trimmed]:/var/lib/snapd/snap/bin 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:44.583 13:38:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:46.486 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:46.486 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:46.486 13:38:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:46.486 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.486 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:46.487 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:46.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:46.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:09:46.487 00:09:46.487 --- 10.0.0.2 ping statistics --- 00:09:46.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.487 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:46.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:46.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:09:46.487 00:09:46.487 --- 10.0.0.1 ping statistics --- 00:09:46.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.487 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=514886 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 514886 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 514886 ']' 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:46.487 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:46.747 [2024-07-25 13:38:43.527326] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
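[Editor's note, not captured output] The nvmf_tcp_init trace above builds the usual split-namespace TCP test bed: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target, while its sibling port (cvl_0_1) stays in the root namespace as the initiator. A condensed sketch of the same sequence, using the interface names and addresses taken from this log:

    # flush any stale addresses, then isolate the target port in its own namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator side (root namespace) gets 10.0.0.1, target side gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow inbound NVMe/TCP (port 4420) on the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify reachability in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1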
00:09:46.747 [2024-07-25 13:38:43.527443] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.747 EAL: No free 2048 kB hugepages reported on node 1 00:09:46.747 [2024-07-25 13:38:43.594620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:46.747 [2024-07-25 13:38:43.711162] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.747 [2024-07-25 13:38:43.711215] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:46.747 [2024-07-25 13:38:43.711229] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.747 [2024-07-25 13:38:43.711241] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.747 [2024-07-25 13:38:43.711251] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:46.747 [2024-07-25 13:38:43.711589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.747 [2024-07-25 13:38:43.711651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.747 [2024-07-25 13:38:43.711718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:46.747 [2024-07-25 13:38:43.711721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:47.005 [2024-07-25 13:38:43.869436] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.005 13:38:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:47.005 [2024-07-25 13:38:43.921942] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:47.005 13:38:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:50.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.718 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:00.718 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:00.718 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:00.718 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:10:00.718 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:00.718 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:10:00.718 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:00.718 13:38:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:00.718 rmmod nvme_tcp 00:10:00.718 rmmod nvme_fabrics 00:10:00.718 rmmod nvme_keyring 00:10:00.718 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:00.718 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:10:00.718 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:10:00.718 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 514886 ']' 00:10:00.718 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 514886 00:10:00.718 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 514886 ']' 00:10:00.718 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 514886 00:10:00.718 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:10:00.718 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:00.718 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 514886 00:10:00.718 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:00.718 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:00.718 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 514886' 00:10:00.718 killing process with pid 514886 00:10:00.718 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 514886 00:10:00.718 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 514886 00:10:00.977 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:00.977 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:00.977 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:00.977 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:00.977 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:00.977 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.977 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.977 13:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.513 13:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:03.513 00:10:03.513 real 0m18.762s 00:10:03.513 user 0m56.032s 00:10:03.513 sys 0m3.371s 00:10:03.513 13:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:03.513 13:38:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:03.513 ************************************ 00:10:03.513 END TEST nvmf_connect_disconnect 00:10:03.513 ************************************ 00:10:03.513 13:38:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:03.514 13:38:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:03.514 13:38:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.514 13:38:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:03.514 ************************************ 00:10:03.514 START TEST nvmf_multitarget 00:10:03.514 ************************************ 00:10:03.514 13:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:03.514 * Looking for test storage... 00:10:03.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.514 13:39:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:10:03.514 13:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:05.476 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:05.476 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:10:05.476 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:05.476 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:05.476 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:05.476 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:05.476 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:05.476 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:10:05.476 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:10:05.476 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:10:05.476 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:10:05.476 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:10:05.476 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:10:05.476 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:10:05.476 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:10:05.476 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:05.476 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:05.476 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:05.476 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:05.476 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:05.476 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:05.476 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:05.477 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.477 13:39:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:05.477 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:05.477 13:39:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:05.477 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:05.477 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:05.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:05.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:10:05.477 00:10:05.477 --- 10.0.0.2 ping statistics --- 00:10:05.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.477 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:05.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:05.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:10:05.477 00:10:05.477 --- 10.0.0.1 ping statistics --- 00:10:05.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.477 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=518702 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 518702 00:10:05.477 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 518702 ']' 00:10:05.478 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.478 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:05.478 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
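[Editor's note, not captured output] As in the previous test, every target-side step runs inside the cvl_0_0_ns_spdk namespace, and RPCs travel over the local UNIX socket. A minimal sketch of the launch-and-wait pattern that waitforlisten performs here (the until-loop is an editorial approximation of the helper, not its actual code; the binary path and flags are copied from the trace):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # block until the app exposes its RPC socket; only then is it safe to issue rpc calls
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done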
00:10:05.478 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:05.478 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:05.478 [2024-07-25 13:39:02.219332] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:05.478 [2024-07-25 13:39:02.219404] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.478 EAL: No free 2048 kB hugepages reported on node 1 00:10:05.478 [2024-07-25 13:39:02.283923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:05.478 [2024-07-25 13:39:02.392210] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:05.478 [2024-07-25 13:39:02.392273] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:05.478 [2024-07-25 13:39:02.392303] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:05.478 [2024-07-25 13:39:02.392315] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:05.478 [2024-07-25 13:39:02.392325] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:05.478 [2024-07-25 13:39:02.392398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.478 [2024-07-25 13:39:02.392465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.478 [2024-07-25 13:39:02.392532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:05.478 [2024-07-25 13:39:02.392535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.736 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:05.736 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:10:05.736 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:05.736 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:05.736 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:05.736 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:05.736 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:05.736 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:05.736 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:05.736 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:05.736 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:05.736 "nvmf_tgt_1" 00:10:05.736 13:39:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:05.993 "nvmf_tgt_2" 00:10:05.993 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:05.993 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:05.993 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:05.993 13:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:06.251 true 00:10:06.251 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:06.251 true 00:10:06.251 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:06.251 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:06.510 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:06.511 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:06.511 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:06.511 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:06.511 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:10:06.511 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:06.511 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:10:06.511 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:06.511 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:06.511 rmmod nvme_tcp 00:10:06.511 rmmod nvme_fabrics 00:10:06.511 rmmod nvme_keyring 00:10:06.511 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:06.511 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:10:06.511 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:10:06.511 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 518702 ']' 00:10:06.511 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 518702 00:10:06.511 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 518702 ']' 00:10:06.511 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 518702 00:10:06.511 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:10:06.511 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
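[Editor's note, not captured output] The multitarget test above verifies that extra targets can be created and torn down at runtime: the default target alone gives jq length 1, two nvmf_create_target calls raise it to 3, and two nvmf_delete_target calls return it to 1. The equivalent manual sequence, using the rpc script path and target names from this log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    $rpc nvmf_get_targets | jq length            # 1: only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    $rpc nvmf_get_targets | jq length            # now 3
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    $rpc nvmf_get_targets | jq length            # back to 1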
00:10:06.511 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 518702 00:10:06.511 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:06.511 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:06.511 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 518702' 00:10:06.511 killing process with pid 518702 00:10:06.511 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 518702 00:10:06.511 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 518702 00:10:06.770 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:06.770 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:06.770 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:06.770 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:06.770 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:06.770 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.770 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.770 13:39:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.675 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:08.675 00:10:08.675 real 0m5.730s 00:10:08.675 user 0m6.323s 00:10:08.675 sys 0m1.913s 00:10:08.675 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:08.675 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:08.675 ************************************ 00:10:08.675 END TEST nvmf_multitarget 00:10:08.675 ************************************ 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:08.933 ************************************ 00:10:08.933 START TEST nvmf_rpc 00:10:08.933 ************************************ 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:08.933 * Looking for test storage... 
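The teardown that ran just before the END banner (nvmftestfini) first unloads the initiator-side kernel modules with a bounded retry, since unloading can race against queues that are still draining, and then kills the target by pid (518702 here) after confirming the pid still names a reactor process rather than sudo. A sketch of the shape of that cleanup under the names this run used, not a verbatim copy of nvmf/common.sh:

  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break   # pulls nvme_tcp, nvme_fabrics, nvme_keyring
  done
  modprobe -v -r nvme-fabrics
  set -e
  kill "$nvmfpid" && wait "$nvmfpid"     # killprocess: pid 518702 in this run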
00:10:08.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.933 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:08.934 13:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:10:08.934 13:39:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.464 13:39:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:11.464 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:11.464 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:11.464 
13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.464 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:11.464 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:11.465 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:11.465 13:39:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:11.465 13:39:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:11.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:11.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:10:11.465 00:10:11.465 --- 10.0.0.2 ping statistics --- 00:10:11.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.465 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:11.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:11.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:10:11.465 00:10:11.465 --- 10.0.0.1 ping statistics --- 00:10:11.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.465 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=521185 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:11.465 13:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 521185 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 521185 ']' 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.465 [2024-07-25 13:39:08.138077] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:11.465 [2024-07-25 13:39:08.138154] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.465 EAL: No free 2048 kB hugepages reported on node 1 00:10:11.465 [2024-07-25 13:39:08.203336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:11.465 [2024-07-25 13:39:08.305742] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.465 [2024-07-25 13:39:08.305797] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:11.465 [2024-07-25 13:39:08.305826] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.465 [2024-07-25 13:39:08.305836] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.465 [2024-07-25 13:39:08.305846] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
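Worth unpacking from the setup that just scrolled past: on this phy machine the two ports of one E810 NIC (cvl_0_0 and cvl_0_1) are split so the target-side port lives in its own network namespace, which lets initiator and target exercise real hardware on a single host. Condensed from the commands above, with the workspace path shortened:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # cross-namespace sanity check
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &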
00:10:11.465 [2024-07-25 13:39:08.305937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.465 [2024-07-25 13:39:08.306048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.465 [2024-07-25 13:39:08.306170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:11.465 [2024-07-25 13:39:08.306175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.465 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:11.465 "tick_rate": 2700000000, 00:10:11.465 "poll_groups": [ 00:10:11.465 { 00:10:11.465 "name": "nvmf_tgt_poll_group_000", 00:10:11.465 "admin_qpairs": 0, 00:10:11.465 "io_qpairs": 0, 00:10:11.465 "current_admin_qpairs": 0, 00:10:11.465 "current_io_qpairs": 0, 00:10:11.465 "pending_bdev_io": 0, 00:10:11.465 "completed_nvme_io": 0, 00:10:11.465 "transports": [] 00:10:11.465 }, 00:10:11.465 { 00:10:11.465 "name": "nvmf_tgt_poll_group_001", 00:10:11.465 "admin_qpairs": 0, 00:10:11.465 "io_qpairs": 0, 00:10:11.465 "current_admin_qpairs": 0, 00:10:11.465 "current_io_qpairs": 0, 00:10:11.465 "pending_bdev_io": 0, 00:10:11.465 "completed_nvme_io": 0, 00:10:11.465 "transports": [] 00:10:11.466 }, 00:10:11.466 { 00:10:11.466 "name": "nvmf_tgt_poll_group_002", 00:10:11.466 "admin_qpairs": 0, 00:10:11.466 "io_qpairs": 0, 00:10:11.466 "current_admin_qpairs": 0, 00:10:11.466 "current_io_qpairs": 0, 00:10:11.466 "pending_bdev_io": 0, 00:10:11.466 "completed_nvme_io": 0, 00:10:11.466 "transports": [] 00:10:11.466 }, 00:10:11.466 { 00:10:11.466 "name": "nvmf_tgt_poll_group_003", 00:10:11.466 "admin_qpairs": 0, 00:10:11.466 "io_qpairs": 0, 00:10:11.466 "current_admin_qpairs": 0, 00:10:11.466 "current_io_qpairs": 0, 00:10:11.466 "pending_bdev_io": 0, 00:10:11.466 "completed_nvme_io": 0, 00:10:11.466 "transports": [] 00:10:11.466 } 00:10:11.466 ] 00:10:11.466 }' 00:10:11.466 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:11.466 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:11.466 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:11.466 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:11.724 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
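The assertions against nvmf_get_stats go through two small helpers, jcount and jsum, whose behavior is readable straight off the trace: jcount counts how many values a jq filter yields, jsum adds them up. A sketch consistent with that trace, assuming the JSON blob sits in $stats as captured above (the real definitions live in target/rpc.sh):

  jcount() {
      local filter=$1
      jq "$filter" <<< "$stats" | wc -l                        # e.g. 4 poll group names
  }
  jsum() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'  # e.g. total qpairs
  }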
00:10:11.724 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:11.724 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:11.724 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:11.724 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.724 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.724 [2024-07-25 13:39:08.558862] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.724 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.724 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:11.724 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.724 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.724 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.724 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:11.724 "tick_rate": 2700000000, 00:10:11.724 "poll_groups": [ 00:10:11.724 { 00:10:11.725 "name": "nvmf_tgt_poll_group_000", 00:10:11.725 "admin_qpairs": 0, 00:10:11.725 "io_qpairs": 0, 00:10:11.725 "current_admin_qpairs": 0, 00:10:11.725 "current_io_qpairs": 0, 00:10:11.725 "pending_bdev_io": 0, 00:10:11.725 "completed_nvme_io": 0, 00:10:11.725 "transports": [ 00:10:11.725 { 00:10:11.725 "trtype": "TCP" 00:10:11.725 } 00:10:11.725 ] 00:10:11.725 }, 00:10:11.725 { 00:10:11.725 "name": "nvmf_tgt_poll_group_001", 00:10:11.725 "admin_qpairs": 0, 00:10:11.725 "io_qpairs": 0, 00:10:11.725 "current_admin_qpairs": 0, 00:10:11.725 "current_io_qpairs": 0, 00:10:11.725 "pending_bdev_io": 0, 00:10:11.725 "completed_nvme_io": 0, 00:10:11.725 "transports": [ 00:10:11.725 { 00:10:11.725 "trtype": "TCP" 00:10:11.725 } 00:10:11.725 ] 00:10:11.725 }, 00:10:11.725 { 00:10:11.725 "name": "nvmf_tgt_poll_group_002", 00:10:11.725 "admin_qpairs": 0, 00:10:11.725 "io_qpairs": 0, 00:10:11.725 "current_admin_qpairs": 0, 00:10:11.725 "current_io_qpairs": 0, 00:10:11.725 "pending_bdev_io": 0, 00:10:11.725 "completed_nvme_io": 0, 00:10:11.725 "transports": [ 00:10:11.725 { 00:10:11.725 "trtype": "TCP" 00:10:11.725 } 00:10:11.725 ] 00:10:11.725 }, 00:10:11.725 { 00:10:11.725 "name": "nvmf_tgt_poll_group_003", 00:10:11.725 "admin_qpairs": 0, 00:10:11.725 "io_qpairs": 0, 00:10:11.725 "current_admin_qpairs": 0, 00:10:11.725 "current_io_qpairs": 0, 00:10:11.725 "pending_bdev_io": 0, 00:10:11.725 "completed_nvme_io": 0, 00:10:11.725 "transports": [ 00:10:11.725 { 00:10:11.725 "trtype": "TCP" 00:10:11.725 } 00:10:11.725 ] 00:10:11.725 } 00:10:11.725 ] 00:10:11.725 }' 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:11.725 13:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.725 Malloc1 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.725 [2024-07-25 13:39:08.703714] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:10:11.725 [2024-07-25 13:39:08.726106] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:10:11.725 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:11.725 could not add new controller: failed to write to nvme-fabrics device 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.725 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.983 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.983 13:39:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:12.549 13:39:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:12.549 13:39:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:12.549 13:39:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:12.549 13:39:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:12.549 13:39:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:14.451 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:14.451 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:14.451 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:14.451 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:14.451 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:14.451 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:14.451 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:14.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:14.708 [2024-07-25 13:39:11.585533] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:10:14.708 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:14.708 could not add new controller: failed to write to nvme-fabrics device 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.708 13:39:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:15.275 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:15.275 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:15.275 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:15.275 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:15.275 13:39:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
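The two failed writes to /dev/nvme-fabrics in this stretch are deliberate: with allow_any_host disabled, a connect from a HOSTNQN that is not on the subsystem's allow-list is rejected in ctrlr.c (nvmf_qpair_access_allowed) and surfaces to the initiator as an input/output error; adding the host, or re-enabling allow_any_host, makes the identical connect succeed. The control-plane half of that round trip, reduced to the RPC calls as issued above:

  SUBNQN=nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_subsystem_allow_any_host -d "$SUBNQN"           # enforce the allow-list
  rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$NVME_HOSTNQN"    # connect now permitted
  rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$NVME_HOSTNQN" # connect rejected again
  rpc_cmd nvmf_subsystem_allow_any_host -e "$SUBNQN"           # any host may connect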
00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:17.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.804 [2024-07-25 13:39:14.373772] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.804 
13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.804 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:18.063 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:18.063 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:18.063 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:18.063 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:18.063 13:39:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:19.963 13:39:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:19.963 13:39:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:19.963 13:39:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:20.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
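waitforserial and its disconnect twin are the initiator-side synchronization points: after nvme connect returns, the block device only appears once the kernel finishes controller discovery, so the helper polls lsblk for the expected serial, bounded to roughly sixteen tries two seconds apart. Approximately what the trace shows (the real helper lives in autotest_common.sh):

  waitforserial() {
      local serial=$1 i=0
      local nvme_device_counter=1 nvme_devices=0
      sleep 2                             # give discovery a head start
      while (( i++ <= 15 )); do
          nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( nvme_devices == nvme_device_counter )) && return 0   # namespace visible
          sleep 2
      done
      return 1
  }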
00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.222 [2024-07-25 13:39:17.101963] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.222 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:20.788 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:20.788 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:10:20.788 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:20.788 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:20.788 13:39:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:23.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:23.324 [2024-07-25 13:39:19.912314] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.324 13:39:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:23.582 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:23.582 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:23.582 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:23.582 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:23.582 13:39:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:26.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.159 13:39:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.159 [2024-07-25 13:39:22.738768] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.159 13:39:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:26.418 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:26.418 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:26.418 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:26.418 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:26.418 13:39:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:28.955 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:28.955 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:28.955 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:28.955 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:28.955 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:28.955 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:28.955 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:28.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.955 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:28.955 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:28.955 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:28.955 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.955 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:28.955 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.955 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:28.955 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.956 [2024-07-25 13:39:25.559986] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.956 13:39:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:29.524 13:39:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:29.524 13:39:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:29.524 13:39:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:29.524 13:39:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:29.524 13:39:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:31.427 13:39:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:31.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.427 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.428 13:39:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.428 [2024-07-25 13:39:28.370076] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.428 [2024-07-25 13:39:28.418161] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.428 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.687 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.687 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.688 [2024-07-25 13:39:28.466324] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.688 [2024-07-25 13:39:28.514523] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.688 [2024-07-25 13:39:28.562666] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.688 13:39:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:10:31.688 "tick_rate": 2700000000,
00:10:31.688 "poll_groups": [
00:10:31.688 {
00:10:31.688 "name": "nvmf_tgt_poll_group_000",
00:10:31.688 "admin_qpairs": 2,
00:10:31.688 "io_qpairs": 84,
00:10:31.688 "current_admin_qpairs": 0,
00:10:31.688 "current_io_qpairs": 0,
00:10:31.688 "pending_bdev_io": 0,
00:10:31.688 "completed_nvme_io": 128,
00:10:31.688 "transports": [
00:10:31.688 {
00:10:31.688 "trtype": "TCP"
00:10:31.688 }
00:10:31.688 ]
00:10:31.688 },
00:10:31.688 {
00:10:31.688 "name": "nvmf_tgt_poll_group_001",
00:10:31.688 "admin_qpairs": 2,
00:10:31.688 "io_qpairs": 84,
00:10:31.688 "current_admin_qpairs": 0,
00:10:31.688 "current_io_qpairs": 0,
00:10:31.688 "pending_bdev_io": 0,
00:10:31.688 "completed_nvme_io": 239,
00:10:31.688 "transports": [
00:10:31.688 {
00:10:31.688 "trtype": "TCP"
00:10:31.688 }
00:10:31.688 ]
00:10:31.688 },
00:10:31.688 {
00:10:31.688 "name": "nvmf_tgt_poll_group_002",
00:10:31.688 "admin_qpairs": 1,
00:10:31.688 "io_qpairs": 84,
00:10:31.688 "current_admin_qpairs": 0,
00:10:31.688 "current_io_qpairs": 0,
00:10:31.688 "pending_bdev_io": 0,
00:10:31.688 "completed_nvme_io": 160,
00:10:31.688 "transports": [
00:10:31.688 {
00:10:31.688 "trtype": "TCP"
00:10:31.688 }
00:10:31.688 ]
00:10:31.688 },
00:10:31.688 {
00:10:31.688 "name": "nvmf_tgt_poll_group_003",
00:10:31.688 "admin_qpairs": 2,
00:10:31.688 "io_qpairs": 84,
00:10:31.688 "current_admin_qpairs": 0,
00:10:31.688 "current_io_qpairs": 0,
00:10:31.688 "pending_bdev_io": 0,
00:10:31.688 "completed_nvme_io": 159,
00:10:31.688 "transports": [
00:10:31.688 {
00:10:31.688 "trtype": "TCP"
00:10:31.688 }
00:10:31.688 ]
00:10:31.688 }
00:10:31.688 ]
00:10:31.688 }'
00:10:31.688 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:10:31.689 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:10:31.689 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:10:31.689 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:10:31.689 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:10:31.689 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:10:31.689 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:10:31.689 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:10:31.689 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:10:31.689 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 ))
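NOTE: the two assertions above are plain arithmetic over the captured nvmf_get_stats JSON. Per the trace, jsum (target/rpc.sh@19-20) pipes a jq filter over $stats into an awk summation: .poll_groups[].admin_qpairs sums to 2+2+1+2 = 7 and .poll_groups[].io_qpairs to 84+84+84+84 = 336, hence the (( 7 > 0 )) and (( 336 > 0 )) checks. A minimal stand-alone equivalent, reconstructed from the trace (the real helper in rpc.sh may differ in small details such as quoting or jq flags):

    # Sum whatever numbers the jq filter extracts from the stats JSON.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
    }
    stats=$(rpc_cmd nvmf_get_stats)                   # as captured above
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+2+1+2 = 7
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 4 * 84  = 336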
00:10:31.689 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:10:31.689 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:10:31.689 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:10:31.689 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:31.689 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync
00:10:31.689 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:31.689 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e
00:10:31.689 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:31.689 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:31.689 rmmod nvme_tcp
00:10:31.948 rmmod nvme_fabrics
00:10:31.948 rmmod nvme_keyring
00:10:31.948 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:31.948 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e
00:10:31.948 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0
00:10:31.948 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 521185 ']'
00:10:31.948 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 521185
00:10:31.948 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 521185 ']'
00:10:31.948 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 521185
00:10:31.948 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname
00:10:31.948 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:31.948 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 521185
00:10:31.948 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:31.948 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:31.948 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 521185'
00:10:31.948 killing process with pid 521185
00:10:31.948 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 521185
00:10:31.948 13:39:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 521185
00:10:32.208 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:32.208 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:32.208 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:32.208 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:32.208 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:32.208 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc --
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.208 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.208 13:39:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.182 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:34.182 00:10:34.182 real 0m25.400s 00:10:34.182 user 1m22.167s 00:10:34.182 sys 0m4.236s 00:10:34.182 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.182 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:34.182 ************************************ 00:10:34.182 END TEST nvmf_rpc 00:10:34.182 ************************************ 00:10:34.182 13:39:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:34.182 13:39:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:34.182 13:39:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.182 13:39:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:34.182 ************************************ 00:10:34.182 START TEST nvmf_invalid 00:10:34.182 ************************************ 00:10:34.182 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:34.439 * Looking for test storage... 00:10:34.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:34.439 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:34.439 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:34.439 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.439 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.439 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.439 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.439 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.439 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.439 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.439 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.439 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.439 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.439 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:34.439 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:34.439 13:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.439 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.439 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:34.439 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.439 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:34.439 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.439 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.439 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.439 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:34.440 13:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:10:34.440 13:39:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:36.373 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:36.373 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:36.373 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:36.373 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:36.373 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:36.374 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:36.374 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:36.374 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.374 13:39:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:36.374 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:36.374 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:36.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:36.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:10:36.633 00:10:36.633 --- 10.0.0.2 ping statistics --- 00:10:36.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.633 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:36.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:36.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:10:36.633 00:10:36.633 --- 10.0.0.1 ping statistics --- 00:10:36.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.633 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=525854 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 525854 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 525854 ']' 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.633 13:39:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:36.633 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:36.633 [2024-07-25 13:39:33.623283] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:36.633 [2024-07-25 13:39:33.623376] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.634 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.893 [2024-07-25 13:39:33.689706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.893 [2024-07-25 13:39:33.798252] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.893 [2024-07-25 13:39:33.798305] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.893 [2024-07-25 13:39:33.798335] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.893 [2024-07-25 13:39:33.798347] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.893 [2024-07-25 13:39:33.798357] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
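
The trace up to this point builds the self-contained TCP test bed that the rest of the section relies on: the two ice ports surface as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into a private network namespace to play the target side, 10.0.0.2 (target) and 10.0.0.1 (initiator) are assigned, port 4420 is opened, and nvmf_tgt is launched inside the namespace. A condensed sketch of the same sequence, with paths shortened to the repo root and the polling loop standing in for waitforlisten, whose internals are not shown in this trace:

  # target NIC goes into its own namespace; the initiator NIC stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # start the target inside the namespace and wait for its RPC socket to answer
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      sleep 0.1   # roughly what waitforlisten does before the tests proceed
  done

The cross-namespace pings above confirm the two sides can reach each other before any NVMe-oF traffic is attempted.
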
00:10:36.893 [2024-07-25 13:39:33.798486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.893 [2024-07-25 13:39:33.798552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.893 [2024-07-25 13:39:33.798622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:36.893 [2024-07-25 13:39:33.798624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.893 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:36.893 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:10:36.893 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:36.893 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:36.893 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:37.149 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.149 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:37.149 13:39:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode26297 00:10:37.406 [2024-07-25 13:39:34.233463] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:37.406 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:37.406 { 00:10:37.406 "nqn": "nqn.2016-06.io.spdk:cnode26297", 00:10:37.406 "tgt_name": "foobar", 00:10:37.406 "method": "nvmf_create_subsystem", 00:10:37.406 "req_id": 1 00:10:37.406 } 00:10:37.406 Got JSON-RPC error response 00:10:37.406 response: 00:10:37.406 { 00:10:37.406 "code": -32603, 00:10:37.406 "message": "Unable to find target foobar" 00:10:37.406 }' 00:10:37.406 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:37.406 { 00:10:37.406 "nqn": "nqn.2016-06.io.spdk:cnode26297", 00:10:37.406 "tgt_name": "foobar", 00:10:37.406 "method": "nvmf_create_subsystem", 00:10:37.406 "req_id": 1 00:10:37.406 } 00:10:37.406 Got JSON-RPC error response 00:10:37.406 response: 00:10:37.406 { 00:10:37.406 "code": -32603, 00:10:37.406 "message": "Unable to find target foobar" 00:10:37.406 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:37.406 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:37.406 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode16205 00:10:37.663 [2024-07-25 13:39:34.534450] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16205: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:37.663 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:37.663 { 00:10:37.663 "nqn": "nqn.2016-06.io.spdk:cnode16205", 00:10:37.663 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:37.663 "method": "nvmf_create_subsystem", 00:10:37.663 "req_id": 1 00:10:37.663 } 00:10:37.664 Got JSON-RPC error 
response 00:10:37.664 response: 00:10:37.664 { 00:10:37.664 "code": -32602, 00:10:37.664 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:37.664 }' 00:10:37.664 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:10:37.664 { 00:10:37.664 "nqn": "nqn.2016-06.io.spdk:cnode16205", 00:10:37.664 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:37.664 "method": "nvmf_create_subsystem", 00:10:37.664 "req_id": 1 00:10:37.664 } 00:10:37.664 Got JSON-RPC error response 00:10:37.664 response: 00:10:37.664 { 00:10:37.664 "code": -32602, 00:10:37.664 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:37.664 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:37.664 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:37.664 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27686 00:10:37.922 [2024-07-25 13:39:34.827432] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27686: invalid model number 'SPDK_Controller' 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:37.922 { 00:10:37.922 "nqn": "nqn.2016-06.io.spdk:cnode27686", 00:10:37.922 "model_number": "SPDK_Controller\u001f", 00:10:37.922 "method": "nvmf_create_subsystem", 00:10:37.922 "req_id": 1 00:10:37.922 } 00:10:37.922 Got JSON-RPC error response 00:10:37.922 response: 00:10:37.922 { 00:10:37.922 "code": -32602, 00:10:37.922 "message": "Invalid MN SPDK_Controller\u001f" 00:10:37.922 }' 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:37.922 { 00:10:37.922 "nqn": "nqn.2016-06.io.spdk:cnode27686", 00:10:37.922 "model_number": "SPDK_Controller\u001f", 00:10:37.922 "method": "nvmf_create_subsystem", 00:10:37.922 "req_id": 1 00:10:37.922 } 00:10:37.922 Got JSON-RPC error response 00:10:37.922 response: 00:10:37.922 { 00:10:37.922 "code": -32602, 00:10:37.922 "message": "Invalid MN SPDK_Controller\u001f" 00:10:37.922 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 85 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:10:37.922 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:10:37.923 13:39:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
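
The long run of printf/echo/string+= lines surrounding this point is the xtrace of gen_random_s, which builds a test string one character at a time from the ASCII codes 32 through 127 declared in the chars array. A compact sketch of the same technique; the three traced steps per character are real, but the random index selection is not visible in the trace and is an assumption here:

  gen_random_s() {
      local length=$1 ll string=
      local chars=({32..127})   # printable ASCII plus DEL, as declared in the trace
      for ((ll = 0; ll < length; ll++)); do
          # pick a code, render it with printf/echo -e, append it: the three
          # steps traced repeatedly above and below
          string+=$(echo -e "\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")")
      done
      echo "$string"
  }

For this run the 21-character result is the serial number 'UdOYA[B$]ZWr<1r5qZA.#' handed to nvmf_create_subsystem a few lines further down; because it contains characters outside the legal serial-number set, the target is expected to reject it.
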
00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ U == \- ]] 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'UdOYA[B$]ZWr<1r5qZA.#' 00:10:37.923 13:39:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'UdOYA[B$]ZWr<1r5qZA.#' nqn.2016-06.io.spdk:cnode2889 00:10:38.184 [2024-07-25 13:39:35.136383] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2889: invalid serial number 'UdOYA[B$]ZWr<1r5qZA.#' 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:38.184 { 00:10:38.184 "nqn": "nqn.2016-06.io.spdk:cnode2889", 00:10:38.184 "serial_number": "UdOYA[B$]ZWr<1r5qZA.#", 00:10:38.184 "method": "nvmf_create_subsystem", 00:10:38.184 "req_id": 1 00:10:38.184 } 00:10:38.184 Got JSON-RPC error response 00:10:38.184 response: 00:10:38.184 { 00:10:38.184 "code": -32602, 00:10:38.184 "message": "Invalid SN UdOYA[B$]ZWr<1r5qZA.#" 00:10:38.184 }' 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:38.184 { 00:10:38.184 "nqn": "nqn.2016-06.io.spdk:cnode2889", 00:10:38.184 "serial_number": "UdOYA[B$]ZWr<1r5qZA.#", 00:10:38.184 "method": "nvmf_create_subsystem", 00:10:38.184 "req_id": 1 00:10:38.184 } 00:10:38.184 Got JSON-RPC error response 00:10:38.184 response: 00:10:38.184 { 00:10:38.184 "code": -32602, 00:10:38.184 "message": "Invalid SN UdOYA[B$]ZWr<1r5qZA.#" 00:10:38.184 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:38.184 
13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:10:38.184 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 
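
The same character-append loop is traced again here, this time producing a 41-character model number. Once it completes, every negative test in this section follows one idiom, visible in the out=/[[ ... ]] pairs above and below: capture the JSON-RPC error emitted by rpc.py, then glob-match it against the expected message. A schematic of that check; the exact exit-status handling in target/invalid.sh is not visible in the trace and is assumed:

  out=$(./scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode26297 2>&1) || true
  [[ $out == *"Unable to find target"* ]]   # the test only passes if the target rejected the request

The escaped patterns in the trace, such as *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t*, are simply xtrace's rendering of these glob matches.
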
00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x62' 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:10:38.185 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x69' 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:10:38.443 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
127 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll 
< length )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ x == \- ]] 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'x'\''&Hr1Ju~dAvtb6_5.:eHxx-iJJ%lTZM_%}'\'']+]f' 00:10:38.444 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'x'\''&Hr1Ju~dAvtb6_5.:eHxx-iJJ%lTZM_%}'\'']+]f' nqn.2016-06.io.spdk:cnode2361 00:10:38.701 [2024-07-25 13:39:35.513636] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2361: invalid model number 'x'&Hr1Ju~dAvtb6_5.:eHxx-iJJ%lTZM_%}']+]f' 00:10:38.701 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:10:38.701 { 00:10:38.701 "nqn": "nqn.2016-06.io.spdk:cnode2361", 00:10:38.701 "model_number": "x'\''&Hr1Ju~dAvtb6_5.:eHxx-iJJ%lTZ\u007fM_%}'\'']+]f", 00:10:38.701 "method": "nvmf_create_subsystem", 00:10:38.701 "req_id": 1 00:10:38.701 } 00:10:38.701 Got JSON-RPC error response 00:10:38.701 response: 00:10:38.701 { 00:10:38.701 "code": -32602, 00:10:38.701 "message": "Invalid MN x'\''&Hr1Ju~dAvtb6_5.:eHxx-iJJ%lTZ\u007fM_%}'\'']+]f" 00:10:38.701 }' 00:10:38.701 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:10:38.701 { 00:10:38.701 "nqn": "nqn.2016-06.io.spdk:cnode2361", 00:10:38.701 "model_number": "x'&Hr1Ju~dAvtb6_5.:eHxx-iJJ%lTZ\u007fM_%}']+]f", 00:10:38.701 "method": "nvmf_create_subsystem", 00:10:38.701 "req_id": 1 00:10:38.701 } 00:10:38.701 Got JSON-RPC error response 00:10:38.701 response: 00:10:38.701 { 00:10:38.701 "code": -32602, 00:10:38.701 "message": "Invalid MN x'&Hr1Ju~dAvtb6_5.:eHxx-iJJ%lTZ\u007fM_%}']+]f" 00:10:38.701 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:38.701 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:10:38.959 [2024-07-25 13:39:35.762535] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.959 13:39:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:39.217 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:10:39.217 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:10:39.217 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:10:39.217 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:10:39.217 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:10:39.474 [2024-07-25 13:39:36.276183] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:39.474 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:10:39.474 { 00:10:39.474 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:39.474 "listen_address": { 00:10:39.474 "trtype": "tcp", 00:10:39.474 "traddr": "", 00:10:39.474 "trsvcid": "4421" 00:10:39.474 }, 00:10:39.474 "method": "nvmf_subsystem_remove_listener", 00:10:39.474 "req_id": 1 00:10:39.474 } 00:10:39.474 Got JSON-RPC error response 00:10:39.474 response: 00:10:39.474 { 00:10:39.474 "code": -32602, 00:10:39.474 "message": "Invalid parameters" 00:10:39.474 }' 00:10:39.474 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:10:39.474 { 00:10:39.474 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:39.474 "listen_address": { 00:10:39.474 "trtype": "tcp", 00:10:39.474 "traddr": "", 00:10:39.474 "trsvcid": "4421" 00:10:39.474 }, 00:10:39.474 "method": "nvmf_subsystem_remove_listener", 00:10:39.474 "req_id": 1 00:10:39.474 } 00:10:39.474 Got JSON-RPC error response 00:10:39.474 response: 00:10:39.474 { 00:10:39.474 "code": -32602, 00:10:39.474 "message": "Invalid parameters" 00:10:39.474 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:39.474 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22148 -i 0 00:10:39.732 [2024-07-25 13:39:36.541015] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22148: invalid cntlid range [0-65519] 00:10:39.732 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:10:39.732 { 00:10:39.732 "nqn": "nqn.2016-06.io.spdk:cnode22148", 00:10:39.732 "min_cntlid": 0, 00:10:39.732 "method": "nvmf_create_subsystem", 00:10:39.732 "req_id": 1 00:10:39.732 } 00:10:39.732 Got JSON-RPC error response 00:10:39.732 response: 00:10:39.732 { 00:10:39.732 "code": -32602, 00:10:39.732 "message": "Invalid cntlid range [0-65519]" 00:10:39.732 }' 00:10:39.732 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:10:39.732 { 00:10:39.732 "nqn": "nqn.2016-06.io.spdk:cnode22148", 00:10:39.732 "min_cntlid": 0, 00:10:39.732 "method": "nvmf_create_subsystem", 00:10:39.732 "req_id": 1 00:10:39.732 } 00:10:39.732 Got 
JSON-RPC error response 00:10:39.732 response: 00:10:39.732 { 00:10:39.732 "code": -32602, 00:10:39.732 "message": "Invalid cntlid range [0-65519]" 00:10:39.732 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:39.732 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28989 -i 65520 00:10:39.990 [2024-07-25 13:39:36.789846] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28989: invalid cntlid range [65520-65519] 00:10:39.990 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:10:39.990 { 00:10:39.990 "nqn": "nqn.2016-06.io.spdk:cnode28989", 00:10:39.990 "min_cntlid": 65520, 00:10:39.990 "method": "nvmf_create_subsystem", 00:10:39.990 "req_id": 1 00:10:39.990 } 00:10:39.990 Got JSON-RPC error response 00:10:39.990 response: 00:10:39.990 { 00:10:39.990 "code": -32602, 00:10:39.990 "message": "Invalid cntlid range [65520-65519]" 00:10:39.990 }' 00:10:39.990 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:10:39.990 { 00:10:39.990 "nqn": "nqn.2016-06.io.spdk:cnode28989", 00:10:39.990 "min_cntlid": 65520, 00:10:39.990 "method": "nvmf_create_subsystem", 00:10:39.990 "req_id": 1 00:10:39.990 } 00:10:39.990 Got JSON-RPC error response 00:10:39.990 response: 00:10:39.990 { 00:10:39.990 "code": -32602, 00:10:39.990 "message": "Invalid cntlid range [65520-65519]" 00:10:39.990 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:39.990 13:39:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29911 -I 0 00:10:40.248 [2024-07-25 13:39:37.038734] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29911: invalid cntlid range [1-0] 00:10:40.248 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:10:40.248 { 00:10:40.248 "nqn": "nqn.2016-06.io.spdk:cnode29911", 00:10:40.248 "max_cntlid": 0, 00:10:40.248 "method": "nvmf_create_subsystem", 00:10:40.248 "req_id": 1 00:10:40.248 } 00:10:40.248 Got JSON-RPC error response 00:10:40.248 response: 00:10:40.248 { 00:10:40.248 "code": -32602, 00:10:40.248 "message": "Invalid cntlid range [1-0]" 00:10:40.248 }' 00:10:40.248 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:10:40.248 { 00:10:40.248 "nqn": "nqn.2016-06.io.spdk:cnode29911", 00:10:40.248 "max_cntlid": 0, 00:10:40.248 "method": "nvmf_create_subsystem", 00:10:40.248 "req_id": 1 00:10:40.248 } 00:10:40.248 Got JSON-RPC error response 00:10:40.248 response: 00:10:40.248 { 00:10:40.248 "code": -32602, 00:10:40.248 "message": "Invalid cntlid range [1-0]" 00:10:40.248 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:40.248 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21319 -I 65520 00:10:40.506 [2024-07-25 13:39:37.287508] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21319: invalid cntlid range [1-65520] 00:10:40.506 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:10:40.506 { 00:10:40.506 "nqn": "nqn.2016-06.io.spdk:cnode21319", 00:10:40.506 
"max_cntlid": 65520, 00:10:40.506 "method": "nvmf_create_subsystem", 00:10:40.506 "req_id": 1 00:10:40.506 } 00:10:40.506 Got JSON-RPC error response 00:10:40.506 response: 00:10:40.506 { 00:10:40.506 "code": -32602, 00:10:40.506 "message": "Invalid cntlid range [1-65520]" 00:10:40.506 }' 00:10:40.506 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:10:40.506 { 00:10:40.506 "nqn": "nqn.2016-06.io.spdk:cnode21319", 00:10:40.506 "max_cntlid": 65520, 00:10:40.506 "method": "nvmf_create_subsystem", 00:10:40.506 "req_id": 1 00:10:40.506 } 00:10:40.506 Got JSON-RPC error response 00:10:40.506 response: 00:10:40.506 { 00:10:40.506 "code": -32602, 00:10:40.506 "message": "Invalid cntlid range [1-65520]" 00:10:40.506 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:40.506 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4766 -i 6 -I 5 00:10:40.764 [2024-07-25 13:39:37.548403] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4766: invalid cntlid range [6-5] 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:10:40.764 { 00:10:40.764 "nqn": "nqn.2016-06.io.spdk:cnode4766", 00:10:40.764 "min_cntlid": 6, 00:10:40.764 "max_cntlid": 5, 00:10:40.764 "method": "nvmf_create_subsystem", 00:10:40.764 "req_id": 1 00:10:40.764 } 00:10:40.764 Got JSON-RPC error response 00:10:40.764 response: 00:10:40.764 { 00:10:40.764 "code": -32602, 00:10:40.764 "message": "Invalid cntlid range [6-5]" 00:10:40.764 }' 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:10:40.764 { 00:10:40.764 "nqn": "nqn.2016-06.io.spdk:cnode4766", 00:10:40.764 "min_cntlid": 6, 00:10:40.764 "max_cntlid": 5, 00:10:40.764 "method": "nvmf_create_subsystem", 00:10:40.764 "req_id": 1 00:10:40.764 } 00:10:40.764 Got JSON-RPC error response 00:10:40.764 response: 00:10:40.764 { 00:10:40.764 "code": -32602, 00:10:40.764 "message": "Invalid cntlid range [6-5]" 00:10:40.764 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:10:40.764 { 00:10:40.764 "name": "foobar", 00:10:40.764 "method": "nvmf_delete_target", 00:10:40.764 "req_id": 1 00:10:40.764 } 00:10:40.764 Got JSON-RPC error response 00:10:40.764 response: 00:10:40.764 { 00:10:40.764 "code": -32602, 00:10:40.764 "message": "The specified target doesn'\''t exist, cannot delete it." 00:10:40.764 }' 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:10:40.764 { 00:10:40.764 "name": "foobar", 00:10:40.764 "method": "nvmf_delete_target", 00:10:40.764 "req_id": 1 00:10:40.764 } 00:10:40.764 Got JSON-RPC error response 00:10:40.764 response: 00:10:40.764 { 00:10:40.764 "code": -32602, 00:10:40.764 "message": "The specified target doesn't exist, cannot delete it." 
00:10:40.764 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:40.764 rmmod nvme_tcp 00:10:40.764 rmmod nvme_fabrics 00:10:40.764 rmmod nvme_keyring 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 525854 ']' 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 525854 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 525854 ']' 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 525854 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 525854 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 525854' 00:10:40.764 killing process with pid 525854 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 525854 00:10:40.764 13:39:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 525854 00:10:41.021 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:41.021 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:41.021 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:41.021 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:41.021 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:41.021 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.021 13:39:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.021 13:39:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:43.557 00:10:43.557 real 0m8.850s 00:10:43.557 user 0m20.472s 00:10:43.557 sys 0m2.482s 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:43.557 ************************************ 00:10:43.557 END TEST nvmf_invalid 00:10:43.557 ************************************ 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:43.557 ************************************ 00:10:43.557 START TEST nvmf_connect_stress 00:10:43.557 ************************************ 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:43.557 * Looking for test storage... 00:10:43.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.557 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:43.558 13:39:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:45.462 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:45.462 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:45.462 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:10:45.462 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:45.462 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:45.462 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:45.462 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:45.462 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:45.462 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:45.463 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:45.463 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:45.463 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:45.463 13:39:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:45.463 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 
dev cvl_0_0 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:45.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:45.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:10:45.463 00:10:45.463 --- 10.0.0.2 ping statistics --- 00:10:45.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.463 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:45.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:45.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:10:45.463 00:10:45.463 --- 10.0.0.1 ping statistics --- 00:10:45.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.463 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:10:45.463 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:45.464 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:45.464 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:45.464 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:45.464 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:45.464 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:45.464 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:45.464 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:45.464 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:45.464 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:45.464 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:45.464 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:45.464 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:45.464 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=528488 00:10:45.464 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 528488 00:10:45.464 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 528488 ']' 00:10:45.464 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
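The nvmf_tcp_init trace above reduces to a short iproute2 sequence. A condensed sketch, using only the interface names (cvl_0_0, cvl_0_1) and addresses the harness detected and assigned; it assumes the same two-port e810 setup and runs as root:

    ip netns add cvl_0_0_ns_spdk                        # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # first port -> target namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                  # verify the path in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The sub-millisecond RTTs in the ping statistics above are the expected result here and confirm the path is usable before the target process starts.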
00:10:45.464 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:45.464 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:45.464 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.464 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:45.464 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:45.464 [2024-07-25 13:39:42.404121] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:45.464 [2024-07-25 13:39:42.404209] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.464 EAL: No free 2048 kB hugepages reported on node 1 00:10:45.464 [2024-07-25 13:39:42.467102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:45.723 [2024-07-25 13:39:42.568405] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.723 [2024-07-25 13:39:42.568472] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.723 [2024-07-25 13:39:42.568500] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.723 [2024-07-25 13:39:42.568511] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.723 [2024-07-25 13:39:42.568520] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
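With nvmf_tgt up inside the namespace, the bring-up that the next entries trace reduces to four RPCs, shown here via the harness's rpc_cmd wrapper; the flags are taken straight from the trace, with NVMF_TRANSPORT_OPTS supplying '-t tcp -o':

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB in-capsule data
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512             # 1000 MiB null bdev, 512-byte blocks

-a lets any host NQN connect and -m caps the subsystem at ten namespaces; the null bdev provides RAM-backed storage so the stress run touches no real disks.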
00:10:45.723 [2024-07-25 13:39:42.568605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.723 [2024-07-25 13:39:42.568667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:45.723 [2024-07-25 13:39:42.568670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:45.723 [2024-07-25 13:39:42.717577] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:45.723 [2024-07-25 13:39:42.745202] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:45.723 NULL1 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@21 -- # PERF_PID=528509 00:10:45.723 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:45.984 EAL: No free 2048 kB hugepages reported on node 1 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.984 13:39:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.244 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.244 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:46.244 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.244 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.244 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.503 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.503 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:46.503 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.503 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.503 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.763 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.763 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:46.763 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.763 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.763 13:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.331 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.331 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:47.331 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.331 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.331 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.589 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.589 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:47.589 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.589 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.589 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.845 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.846 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:47.846 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.846 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.846 13:39:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.103 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.103 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:48.103 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.103 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.103 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.362 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.362 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:48.362 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.362 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.362 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.930 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.930 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:48.930 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.930 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.930 13:39:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.187 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.187 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:49.187 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.187 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.187 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.446 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.446 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:49.446 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.446 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.446 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.705 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.705 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:49.705 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.705 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.705 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.965 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.965 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:49.965 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.965 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.965 13:39:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.532 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.532 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:50.532 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.532 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.532 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.790 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.790 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:50.790 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.790 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.790 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.049 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.049 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:51.050 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.050 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.050 13:39:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.308 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.308 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:51.308 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.308 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.308 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.568 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.568 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:51.568 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.568 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.568 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.137 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.137 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:52.137 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.137 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.137 13:39:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.397 13:39:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.397 13:39:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:52.397 13:39:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.397 13:39:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.397 13:39:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.665 13:39:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.665 13:39:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:52.665 13:39:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.665 13:39:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.665 13:39:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.936 13:39:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.936 13:39:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:52.936 13:39:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.936 13:39:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.936 13:39:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.195 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.195 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:53.195 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.195 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.195 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.762 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.762 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:53.762 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.762 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.762 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.021 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.021 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:54.021 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.021 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.021 13:39:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.280 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.280 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:54.280 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.280 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.280 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.538 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.538 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:54.538 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.538 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.538 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.795 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.795 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:54.795 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.795 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.795 13:39:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.361 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.361 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:55.361 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.361 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.361 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.620 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.620 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:55.620 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.620 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.620 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.878 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.878 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:55.878 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.878 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.878 13:39:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.878 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:56.136 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.136 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 528509 00:10:56.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: 
(528509) - No such process 00:10:56.136 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 528509 00:10:56.136 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:56.136 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:56.136 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:56.136 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:56.136 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:56.136 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:56.136 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:56.136 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:56.136 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:56.136 rmmod nvme_tcp 00:10:56.136 rmmod nvme_fabrics 00:10:56.136 rmmod nvme_keyring 00:10:56.136 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:56.136 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:56.136 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:56.136 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 528488 ']' 00:10:56.136 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 528488 00:10:56.136 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 528488 ']' 00:10:56.136 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 528488 00:10:56.136 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:10:56.136 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:56.136 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 528488 00:10:56.394 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:56.394 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:56.394 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 528488' 00:10:56.394 killing process with pid 528488 00:10:56.394 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 528488 00:10:56.394 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 528488 00:10:56.657 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:56.657 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:56.657 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
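[Editor's note] The trace above is target/connect_stress.sh's monitor loop: while the background stress client (pid 528509) is alive, `kill -0` succeeds and another `rpc_cmd` batch is issued; once `kill -0` reports "No such process", the script waits on the pid and tears down. A minimal sketch of that pattern, assuming $stress_pid is the client pid, $testdir is test/nvmf/target, and rpc_cmd is the RPC wrapper sourced from autotest_common.sh (the RPC payload itself is not shown in the log):

while kill -0 "$stress_pid" 2>/dev/null; do   # succeeds while the client is alive
    rpc_cmd < "$testdir/rpc.txt"              # replay the prepared RPC batch against the target
done
wait "$stress_pid"                            # reap the exit status ("@38 -- # wait 528509")
rm -f "$testdir/rpc.txt"                      # clean up the payload file ("@39 -- # rm -f .../rpc.txt")
trap - SIGINT SIGTERM EXIT                    # drop the error trap; nvmftestfini then tears the target down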
00:10:56.657 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:56.657 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:56.657 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.657 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.657 13:39:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.585 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:58.585 00:10:58.585 real 0m15.382s 00:10:58.585 user 0m38.557s 00:10:58.585 sys 0m5.887s 00:10:58.585 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.585 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.585 ************************************ 00:10:58.585 END TEST nvmf_connect_stress 00:10:58.585 ************************************ 00:10:58.585 13:39:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:58.585 13:39:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:58.585 13:39:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.585 13:39:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:58.585 ************************************ 00:10:58.585 START TEST nvmf_fused_ordering 00:10:58.585 ************************************ 00:10:58.585 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:58.585 * Looking for test storage... 
00:10:58.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:58.585 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:58.585 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:58.585 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.585 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.585 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.585 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.585 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.585 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.585 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three tool directories repeated from earlier re-sourcing]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=[as above, with /opt/go/1.21.1/bin prepended]
00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=[as above, with /opt/protoc/21.7/bin prepended]
00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH
00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo [the exported PATH, identical to the value above]
00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0
00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33
-- # '[' -n '' ']' 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:58.586 13:39:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:01.122 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:01.122 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:01.122 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:01.123 13:39:57 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:01.123 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:01.123 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
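[Editor's note] The nvmf/common.sh trace here is NIC discovery: per-family arrays (e810, x722, mlx) are filled with PCI addresses looked up by vendor:device ID, the e810 list is selected ("[[ e810 == e810 ]]" above), and each match is reported; both ports of an Intel E810 (0x8086:0x159b) are found on this rig. A simplified sketch of that step, assuming pci_bus_cache is an associative "vendor:device -> PCI address" map filled earlier in common.sh:

intel=0x8086 mellanox=0x15b3
e810+=(${pci_bus_cache["$intel:0x1592"]})    # one E810 variant
e810+=(${pci_bus_cache["$intel:0x159b"]})    # the E810 variant matched here (both ports)
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # one of several ConnectX IDs in the real list
pci_devs=("${e810[@]}")                      # e810 family selected for this run
for pci in "${pci_devs[@]}"; do
    echo "Found $pci"                        # e.g. "Found 0000:0a:00.0 (0x8086 - 0x159b)"
done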
00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:01.123 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:01.123 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:01.123 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:01.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:01.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:11:01.124 00:11:01.124 --- 10.0.0.2 ping statistics --- 00:11:01.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.124 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:01.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:01.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:11:01.124 00:11:01.124 --- 10.0.0.1 ping statistics --- 00:11:01.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.124 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=531661 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 531661 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 531661 ']' 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:01.124 13:39:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:01.124 [2024-07-25 13:39:57.884114] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:01.124 [2024-07-25 13:39:57.884203] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.124 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.124 [2024-07-25 13:39:57.953349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.124 [2024-07-25 13:39:58.060230] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.124 [2024-07-25 13:39:58.060287] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.124 [2024-07-25 13:39:58.060316] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.124 [2024-07-25 13:39:58.060328] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.124 [2024-07-25 13:39:58.060339] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:01.124 [2024-07-25 13:39:58.060390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:01.387 [2024-07-25 13:39:58.201104] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:11:01.387 [2024-07-25 13:39:58.217312] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:01.387 NULL1 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.387 13:39:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:01.387 [2024-07-25 13:39:58.261448] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:01.388 [2024-07-25 13:39:58.261486] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531796 ] 00:11:01.388 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.958 Attached to nqn.2016-06.io.spdk:cnode1 00:11:01.958 Namespace ID: 1 size: 1GB 00:11:01.958 fused_ordering(0) 00:11:01.958 fused_ordering(1) 00:11:01.958 fused_ordering(2) 00:11:01.958 fused_ordering(3) 00:11:01.958 fused_ordering(4) 00:11:01.958 fused_ordering(5) 00:11:01.958 fused_ordering(6) 00:11:01.958 fused_ordering(7) 00:11:01.958 fused_ordering(8) 00:11:01.958 fused_ordering(9) 00:11:01.958 fused_ordering(10) 00:11:01.958 fused_ordering(11) 00:11:01.958 fused_ordering(12) 00:11:01.958 fused_ordering(13) 00:11:01.958 fused_ordering(14) 00:11:01.958 fused_ordering(15) 00:11:01.958 fused_ordering(16) 00:11:01.958 fused_ordering(17) 00:11:01.958 fused_ordering(18) 00:11:01.958 fused_ordering(19) 00:11:01.958 fused_ordering(20) 00:11:01.958 fused_ordering(21) 00:11:01.958 fused_ordering(22) 00:11:01.958 fused_ordering(23) 00:11:01.958 fused_ordering(24) 00:11:01.958 fused_ordering(25) 00:11:01.958 fused_ordering(26) 00:11:01.958 fused_ordering(27) 00:11:01.958 fused_ordering(28) 00:11:01.958 fused_ordering(29) 00:11:01.958 fused_ordering(30) 00:11:01.958 fused_ordering(31) 00:11:01.958 fused_ordering(32) 00:11:01.958 fused_ordering(33) 00:11:01.958 fused_ordering(34) 00:11:01.958 fused_ordering(35) 00:11:01.958 fused_ordering(36) 00:11:01.958 fused_ordering(37) 00:11:01.958 fused_ordering(38) 00:11:01.958 fused_ordering(39) 00:11:01.958 fused_ordering(40) 00:11:01.958 fused_ordering(41) 00:11:01.958 fused_ordering(42) 00:11:01.958 fused_ordering(43) 00:11:01.958 fused_ordering(44) 00:11:01.958 fused_ordering(45) 00:11:01.958 fused_ordering(46) 00:11:01.958 fused_ordering(47) 00:11:01.958 fused_ordering(48) 00:11:01.958 fused_ordering(49) 00:11:01.958 fused_ordering(50) 00:11:01.958 fused_ordering(51) 00:11:01.958 fused_ordering(52) 00:11:01.958 fused_ordering(53) 00:11:01.958 fused_ordering(54) 00:11:01.958 fused_ordering(55) 00:11:01.958 fused_ordering(56) 00:11:01.958 fused_ordering(57) 00:11:01.958 fused_ordering(58) 00:11:01.958 fused_ordering(59) 00:11:01.958 fused_ordering(60) 00:11:01.958 fused_ordering(61) 00:11:01.958 fused_ordering(62) 00:11:01.958 fused_ordering(63) 00:11:01.958 fused_ordering(64) 00:11:01.958 fused_ordering(65) 00:11:01.958 fused_ordering(66) 00:11:01.958 fused_ordering(67) 00:11:01.958 fused_ordering(68) 00:11:01.958 fused_ordering(69) 00:11:01.958 fused_ordering(70) 00:11:01.958 fused_ordering(71) 00:11:01.958 fused_ordering(72) 00:11:01.958 fused_ordering(73) 00:11:01.958 fused_ordering(74) 00:11:01.958 fused_ordering(75) 00:11:01.958 fused_ordering(76) 00:11:01.958 fused_ordering(77) 00:11:01.958 fused_ordering(78) 00:11:01.958 fused_ordering(79) 00:11:01.958 fused_ordering(80) 00:11:01.958 fused_ordering(81) 00:11:01.958 fused_ordering(82) 00:11:01.958 fused_ordering(83) 00:11:01.958 fused_ordering(84) 00:11:01.958 fused_ordering(85) 00:11:01.958 fused_ordering(86) 00:11:01.958 fused_ordering(87) 00:11:01.958 fused_ordering(88) 00:11:01.958 fused_ordering(89) 00:11:01.958 fused_ordering(90) 00:11:01.958 fused_ordering(91) 00:11:01.958 fused_ordering(92) 00:11:01.958 fused_ordering(93) 00:11:01.958 fused_ordering(94) 00:11:01.958 fused_ordering(95) 00:11:01.958 fused_ordering(96) 
00:11:01.958 fused_ordering(97) 00:11:01.958 fused_ordering(98) 00:11:01.958 fused_ordering(99) 00:11:01.958 fused_ordering(100) 00:11:01.958 fused_ordering(101) 00:11:01.958 fused_ordering(102) 00:11:01.958 fused_ordering(103) 00:11:01.958 fused_ordering(104) 00:11:01.958 fused_ordering(105) 00:11:01.958 fused_ordering(106) 00:11:01.958 fused_ordering(107) 00:11:01.958 fused_ordering(108) 00:11:01.958 fused_ordering(109) 00:11:01.958 fused_ordering(110) 00:11:01.958 fused_ordering(111) 00:11:01.958 fused_ordering(112) 00:11:01.958 fused_ordering(113) 00:11:01.958 fused_ordering(114) 00:11:01.958 fused_ordering(115) 00:11:01.958 fused_ordering(116) 00:11:01.958 fused_ordering(117) 00:11:01.958 fused_ordering(118) 00:11:01.958 fused_ordering(119) 00:11:01.958 fused_ordering(120) 00:11:01.958 fused_ordering(121) 00:11:01.958 fused_ordering(122) 00:11:01.958 fused_ordering(123) 00:11:01.958 fused_ordering(124) 00:11:01.958 fused_ordering(125) 00:11:01.958 fused_ordering(126) 00:11:01.958 fused_ordering(127) 00:11:01.958 fused_ordering(128) 00:11:01.958 fused_ordering(129) 00:11:01.958 fused_ordering(130) 00:11:01.958 fused_ordering(131) 00:11:01.958 fused_ordering(132) 00:11:01.958 fused_ordering(133) 00:11:01.958 fused_ordering(134) 00:11:01.958 fused_ordering(135) 00:11:01.958 fused_ordering(136) 00:11:01.958 fused_ordering(137) 00:11:01.958 fused_ordering(138) 00:11:01.958 fused_ordering(139) 00:11:01.958 fused_ordering(140) 00:11:01.958 fused_ordering(141) 00:11:01.958 fused_ordering(142) 00:11:01.958 fused_ordering(143) 00:11:01.958 fused_ordering(144) 00:11:01.958 fused_ordering(145) 00:11:01.958 fused_ordering(146) 00:11:01.958 fused_ordering(147) 00:11:01.958 fused_ordering(148) 00:11:01.958 fused_ordering(149) 00:11:01.958 fused_ordering(150) 00:11:01.958 fused_ordering(151) 00:11:01.958 fused_ordering(152) 00:11:01.958 fused_ordering(153) 00:11:01.958 fused_ordering(154) 00:11:01.958 fused_ordering(155) 00:11:01.958 fused_ordering(156) 00:11:01.958 fused_ordering(157) 00:11:01.958 fused_ordering(158) 00:11:01.959 fused_ordering(159) 00:11:01.959 fused_ordering(160) 00:11:01.959 fused_ordering(161) 00:11:01.959 fused_ordering(162) 00:11:01.959 fused_ordering(163) 00:11:01.959 fused_ordering(164) 00:11:01.959 fused_ordering(165) 00:11:01.959 fused_ordering(166) 00:11:01.959 fused_ordering(167) 00:11:01.959 fused_ordering(168) 00:11:01.959 fused_ordering(169) 00:11:01.959 fused_ordering(170) 00:11:01.959 fused_ordering(171) 00:11:01.959 fused_ordering(172) 00:11:01.959 fused_ordering(173) 00:11:01.959 fused_ordering(174) 00:11:01.959 fused_ordering(175) 00:11:01.959 fused_ordering(176) 00:11:01.959 fused_ordering(177) 00:11:01.959 fused_ordering(178) 00:11:01.959 fused_ordering(179) 00:11:01.959 fused_ordering(180) 00:11:01.959 fused_ordering(181) 00:11:01.959 fused_ordering(182) 00:11:01.959 fused_ordering(183) 00:11:01.959 fused_ordering(184) 00:11:01.959 fused_ordering(185) 00:11:01.959 fused_ordering(186) 00:11:01.959 fused_ordering(187) 00:11:01.959 fused_ordering(188) 00:11:01.959 fused_ordering(189) 00:11:01.959 fused_ordering(190) 00:11:01.959 fused_ordering(191) 00:11:01.959 fused_ordering(192) 00:11:01.959 fused_ordering(193) 00:11:01.959 fused_ordering(194) 00:11:01.959 fused_ordering(195) 00:11:01.959 fused_ordering(196) 00:11:01.959 fused_ordering(197) 00:11:01.959 fused_ordering(198) 00:11:01.959 fused_ordering(199) 00:11:01.959 fused_ordering(200) 00:11:01.959 fused_ordering(201) 00:11:01.959 fused_ordering(202) 00:11:01.959 fused_ordering(203) 00:11:01.959 
fused_ordering(204) 00:11:01.959 fused_ordering(205) 00:11:02.218 fused_ordering(206) 00:11:02.218 fused_ordering(207) 00:11:02.218 fused_ordering(208) 00:11:02.218 fused_ordering(209) 00:11:02.218 fused_ordering(210) 00:11:02.218 fused_ordering(211) 00:11:02.218 fused_ordering(212) 00:11:02.218 fused_ordering(213) 00:11:02.218 fused_ordering(214) 00:11:02.218 fused_ordering(215) 00:11:02.218 fused_ordering(216) 00:11:02.218 fused_ordering(217) 00:11:02.218 fused_ordering(218) 00:11:02.218 fused_ordering(219) 00:11:02.218 fused_ordering(220) 00:11:02.218 fused_ordering(221) 00:11:02.218 fused_ordering(222) 00:11:02.218 fused_ordering(223) 00:11:02.218 fused_ordering(224) 00:11:02.218 fused_ordering(225) 00:11:02.218 fused_ordering(226) 00:11:02.218 fused_ordering(227) 00:11:02.218 fused_ordering(228) 00:11:02.218 fused_ordering(229) 00:11:02.218 fused_ordering(230) 00:11:02.218 fused_ordering(231) 00:11:02.218 fused_ordering(232) 00:11:02.218 fused_ordering(233) 00:11:02.218 fused_ordering(234) 00:11:02.218 fused_ordering(235) 00:11:02.218 fused_ordering(236) 00:11:02.218 fused_ordering(237) 00:11:02.218 fused_ordering(238) 00:11:02.218 fused_ordering(239) 00:11:02.218 fused_ordering(240) 00:11:02.218 fused_ordering(241) 00:11:02.218 fused_ordering(242) 00:11:02.218 fused_ordering(243) 00:11:02.218 fused_ordering(244) 00:11:02.218 fused_ordering(245) 00:11:02.218 fused_ordering(246) 00:11:02.218 fused_ordering(247) 00:11:02.218 fused_ordering(248) 00:11:02.218 fused_ordering(249) 00:11:02.218 fused_ordering(250) 00:11:02.218 fused_ordering(251) 00:11:02.218 fused_ordering(252) 00:11:02.218 fused_ordering(253) 00:11:02.218 fused_ordering(254) 00:11:02.218 fused_ordering(255) 00:11:02.218 fused_ordering(256) 00:11:02.218 fused_ordering(257) 00:11:02.218 fused_ordering(258) 00:11:02.218 fused_ordering(259) 00:11:02.218 fused_ordering(260) 00:11:02.218 fused_ordering(261) 00:11:02.218 fused_ordering(262) 00:11:02.218 fused_ordering(263) 00:11:02.218 fused_ordering(264) 00:11:02.218 fused_ordering(265) 00:11:02.218 fused_ordering(266) 00:11:02.218 fused_ordering(267) 00:11:02.218 fused_ordering(268) 00:11:02.218 fused_ordering(269) 00:11:02.218 fused_ordering(270) 00:11:02.218 fused_ordering(271) 00:11:02.218 fused_ordering(272) 00:11:02.218 fused_ordering(273) 00:11:02.218 fused_ordering(274) 00:11:02.218 fused_ordering(275) 00:11:02.218 fused_ordering(276) 00:11:02.218 fused_ordering(277) 00:11:02.218 fused_ordering(278) 00:11:02.218 fused_ordering(279) 00:11:02.218 fused_ordering(280) 00:11:02.218 fused_ordering(281) 00:11:02.218 fused_ordering(282) 00:11:02.218 fused_ordering(283) 00:11:02.218 fused_ordering(284) 00:11:02.218 fused_ordering(285) 00:11:02.218 fused_ordering(286) 00:11:02.218 fused_ordering(287) 00:11:02.218 fused_ordering(288) 00:11:02.218 fused_ordering(289) 00:11:02.218 fused_ordering(290) 00:11:02.218 fused_ordering(291) 00:11:02.218 fused_ordering(292) 00:11:02.218 fused_ordering(293) 00:11:02.218 fused_ordering(294) 00:11:02.218 fused_ordering(295) 00:11:02.218 fused_ordering(296) 00:11:02.218 fused_ordering(297) 00:11:02.218 fused_ordering(298) 00:11:02.218 fused_ordering(299) 00:11:02.218 fused_ordering(300) 00:11:02.218 fused_ordering(301) 00:11:02.218 fused_ordering(302) 00:11:02.218 fused_ordering(303) 00:11:02.218 fused_ordering(304) 00:11:02.218 fused_ordering(305) 00:11:02.218 fused_ordering(306) 00:11:02.218 fused_ordering(307) 00:11:02.218 fused_ordering(308) 00:11:02.218 fused_ordering(309) 00:11:02.218 fused_ordering(310) 00:11:02.218 fused_ordering(311) 
00:11:02.218 fused_ordering(312) 00:11:02.218 fused_ordering(313) 00:11:02.218 fused_ordering(314) 00:11:02.218 fused_ordering(315) 00:11:02.218 fused_ordering(316) 00:11:02.218 fused_ordering(317) 00:11:02.218 fused_ordering(318) 00:11:02.218 fused_ordering(319) 00:11:02.218 fused_ordering(320) 00:11:02.218 fused_ordering(321) 00:11:02.218 fused_ordering(322) 00:11:02.218 fused_ordering(323) 00:11:02.218 fused_ordering(324) 00:11:02.218 fused_ordering(325) 00:11:02.218 fused_ordering(326) 00:11:02.218 fused_ordering(327) 00:11:02.218 fused_ordering(328) 00:11:02.218 fused_ordering(329) 00:11:02.218 fused_ordering(330) 00:11:02.218 fused_ordering(331) 00:11:02.218 fused_ordering(332) 00:11:02.218 fused_ordering(333) 00:11:02.218 fused_ordering(334) 00:11:02.218 fused_ordering(335) 00:11:02.218 fused_ordering(336) 00:11:02.218 fused_ordering(337) 00:11:02.218 fused_ordering(338) 00:11:02.218 fused_ordering(339) 00:11:02.218 fused_ordering(340) 00:11:02.218 fused_ordering(341) 00:11:02.218 fused_ordering(342) 00:11:02.218 fused_ordering(343) 00:11:02.218 fused_ordering(344) 00:11:02.218 fused_ordering(345) 00:11:02.218 fused_ordering(346) 00:11:02.218 fused_ordering(347) 00:11:02.218 fused_ordering(348) 00:11:02.218 fused_ordering(349) 00:11:02.218 fused_ordering(350) 00:11:02.218 fused_ordering(351) 00:11:02.218 fused_ordering(352) 00:11:02.218 fused_ordering(353) 00:11:02.218 fused_ordering(354) 00:11:02.218 fused_ordering(355) 00:11:02.218 fused_ordering(356) 00:11:02.218 fused_ordering(357) 00:11:02.218 fused_ordering(358) 00:11:02.218 fused_ordering(359) 00:11:02.218 fused_ordering(360) 00:11:02.218 fused_ordering(361) 00:11:02.218 fused_ordering(362) 00:11:02.218 fused_ordering(363) 00:11:02.218 fused_ordering(364) 00:11:02.218 fused_ordering(365) 00:11:02.218 fused_ordering(366) 00:11:02.218 fused_ordering(367) 00:11:02.218 fused_ordering(368) 00:11:02.218 fused_ordering(369) 00:11:02.218 fused_ordering(370) 00:11:02.218 fused_ordering(371) 00:11:02.218 fused_ordering(372) 00:11:02.218 fused_ordering(373) 00:11:02.218 fused_ordering(374) 00:11:02.218 fused_ordering(375) 00:11:02.218 fused_ordering(376) 00:11:02.218 fused_ordering(377) 00:11:02.218 fused_ordering(378) 00:11:02.218 fused_ordering(379) 00:11:02.218 fused_ordering(380) 00:11:02.218 fused_ordering(381) 00:11:02.218 fused_ordering(382) 00:11:02.218 fused_ordering(383) 00:11:02.218 fused_ordering(384) 00:11:02.218 fused_ordering(385) 00:11:02.218 fused_ordering(386) 00:11:02.218 fused_ordering(387) 00:11:02.218 fused_ordering(388) 00:11:02.218 fused_ordering(389) 00:11:02.218 fused_ordering(390) 00:11:02.218 fused_ordering(391) 00:11:02.218 fused_ordering(392) 00:11:02.218 fused_ordering(393) 00:11:02.218 fused_ordering(394) 00:11:02.218 fused_ordering(395) 00:11:02.218 fused_ordering(396) 00:11:02.218 fused_ordering(397) 00:11:02.218 fused_ordering(398) 00:11:02.218 fused_ordering(399) 00:11:02.218 fused_ordering(400) 00:11:02.218 fused_ordering(401) 00:11:02.218 fused_ordering(402) 00:11:02.218 fused_ordering(403) 00:11:02.218 fused_ordering(404) 00:11:02.218 fused_ordering(405) 00:11:02.218 fused_ordering(406) 00:11:02.218 fused_ordering(407) 00:11:02.218 fused_ordering(408) 00:11:02.218 fused_ordering(409) 00:11:02.218 fused_ordering(410) 00:11:02.787 fused_ordering(411) 00:11:02.787 fused_ordering(412) 00:11:02.787 fused_ordering(413) 00:11:02.787 fused_ordering(414) 00:11:02.787 fused_ordering(415) 00:11:02.787 fused_ordering(416) 00:11:02.787 fused_ordering(417) 00:11:02.787 fused_ordering(418) 00:11:02.787 
fused_ordering(419) through fused_ordering(1023) [605 near-identical per-iteration entries, logged between 00:11:02.787 and 00:11:03.986, condensed]
00:11:03.986 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:11:03.986 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:11:03.986 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:03.986 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync
00:11:03.986 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:03.986 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e
00:11:03.986 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:03.986 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:03.986 rmmod nvme_tcp
00:11:03.986 rmmod nvme_fabrics
00:11:03.986 rmmod nvme_keyring
00:11:03.986 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:03.986 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e
00:11:03.986 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125
-- # return 0 00:11:03.986 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 531661 ']' 00:11:03.986 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 531661 00:11:03.986 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 531661 ']' 00:11:03.986 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 531661 00:11:03.986 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:11:03.986 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:03.986 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 531661 00:11:03.986 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:03.986 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:03.986 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 531661' 00:11:03.986 killing process with pid 531661 00:11:03.986 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 531661 00:11:03.986 13:40:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 531661 00:11:04.244 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:04.244 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:04.245 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:04.245 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:04.245 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:04.245 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.245 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.245 13:40:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.153 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:06.153 00:11:06.153 real 0m7.566s 00:11:06.153 user 0m5.135s 00:11:06.153 sys 0m3.200s 00:11:06.153 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:06.153 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:06.153 ************************************ 00:11:06.153 END TEST nvmf_fused_ordering 00:11:06.153 ************************************ 00:11:06.153 13:40:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:06.153 13:40:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:06.153 13:40:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:06.153 13:40:03 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:06.153 ************************************ 00:11:06.153 START TEST nvmf_ns_masking 00:11:06.153 ************************************ 00:11:06.153 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:06.413 * Looking for test storage... 00:11:06.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain triple repeated; condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same toolchain triple repeated; condensed]:/var/lib/snapd/snap/bin
00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same toolchain triple repeated; condensed]:/var/lib/snapd/snap/bin
00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH
00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same toolchain triple repeated; condensed]:/var/lib/snapd/snap/bin
00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0
00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:06.413 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:11:06.413 13:40:03
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=073268e2-7283-462c-b57d-3e46033cf47e 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c463e42f-bb16-4e05-90b1-739a87effc2e 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=2fb0f499-ecd7-4a49-9c52-157620f6c428 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:06.414 13:40:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:08.317 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:08.317 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:08.317 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:08.317 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.317 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:08.318 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:08.318 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:08.318 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:08.318 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:08.576 13:40:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:08.576 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:08.576 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:11:08.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:08.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms
00:11:08.576
00:11:08.576 --- 10.0.0.2 ping statistics ---
00:11:08.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:08.576 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms
00:11:08.576 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:08.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:08.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms
00:11:08.576
00:11:08.576 --- 10.0.0.1 ping statistics ---
00:11:08.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:08.576 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms
00:11:08.576 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:08.576 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0
00:11:08.576 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:11:08.576 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:08.576 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:11:08.576 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:11:08.576 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:08.576 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:11:08.576 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:11:08.576 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:11:08.576 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:11:08.576 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable
00:11:08.576 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:11:08.576 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=533998
13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:11:08.577 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 533998
00:11:08.577 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 533998 ']'
00:11:08.577 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:08.577 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100
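For orientation: what the trace has just built is a two-port loopback rig. One physical port (cvl_0_0) is parked in a private network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2 to act as the NVMe/TCP target, while its peer (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, and the SPDK target is then launched inside that namespace. A minimal standalone sketch of the same setup, using only commands visible in this run (interface names, addresses, and the nvmf_tgt flags come from the trace; the explicit backgrounding and PID capture are an assumption about how the harness tracks nvmfpid):

    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # sanity: target side reachable from the initiator
    # start the target inside the namespace; its RPC socket appears at /var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!                                          # assumed PID capture; the trace shows nvmfpid=533998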
00:11:08.577 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:08.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:08.577 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:08.577 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:11:08.577 [2024-07-25 13:40:05.455442] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:11:08.577 [2024-07-25 13:40:05.455526] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:08.577 EAL: No free 2048 kB hugepages reported on node 1
00:11:08.577 [2024-07-25 13:40:05.517121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:08.835 [2024-07-25 13:40:05.620342] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:08.835 [2024-07-25 13:40:05.620398] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:08.835 [2024-07-25 13:40:05.620427] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:08.835 [2024-07-25 13:40:05.620438] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:08.835 [2024-07-25 13:40:05.620447] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:08.835 [2024-07-25 13:40:05.620479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:08.835 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:08.835 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0
00:11:08.835 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:11:08.835 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable
00:11:08.835 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:11:08.835 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:08.835 13:40:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:11:09.093 [2024-07-25 13:40:05.983326] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:09.093 13:40:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64
00:11:09.093 13:40:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512
00:11:09.093 13:40:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:11:09.351 Malloc1
00:11:09.351 13:40:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:11:09.608 Malloc2
00:11:09.608 13:40:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:09.866 13:40:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
00:11:10.124 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:10.382 [2024-07-25 13:40:07.247973] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:10.382 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect
00:11:10.382 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2fb0f499-ecd7-4a49-9c52-157620f6c428 -a 10.0.0.2 -s 4420 -i 4
00:11:10.382 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME
00:11:10.382 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0
00:11:10.382 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:11:10.382 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:11:10.382 13:40:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2
00:11:12.918 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:11:12.918 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
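The ns_is_visible checks that dominate the rest of this test reduce to two nvme-cli reads against the controller discovered above. A sketch of the check, using the commands as they appear in the trace (the all-zero comparison mirrors the harness's [[ $nguid != \0...\0 ]] test):

    nvme list-ns /dev/nvme0 | grep 0x1          # is nsid 1 in the controller's active namespace list?
    nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
    # a namespace masked from this host reports an all-zero NGUID
    [[ $nguid != 00000000000000000000000000000000 ]]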
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:11:12.919 [ 0]:0x1
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=932b3e2fb9194853bf7ca15acc01cdd2
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 932b3e2fb9194853bf7ca15acc01cdd2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:11:12.919 [ 0]:0x1
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=932b3e2fb9194853bf7ca15acc01cdd2
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 932b3e2fb9194853bf7ca15acc01cdd2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:11:12.919 [ 1]:0x2
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0a1a2e7053704059a5af9d9df0b10a4d
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a1a2e7053704059a5af9d9df0b10a4d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:11:12.919 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:13.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:13.177 13:40:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:13.177 13:40:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
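This is the core of the masking flow: the namespace is removed and re-added with --no-auto-visible, which hides it from every host until access is granted per host NQN, and the reconnect that follows verifies exactly that. Condensed to the RPC calls seen in this run (NQNs and the Malloc1 bdev come from the trace; rpc.py stands in for the full scripts/rpc.py path used above):

    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # at this point a connected host sees an all-zero NGUID for nsid 1
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1     # grant: nsid 1 becomes visible to host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1  # revoke: nsid 1 disappears again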
00:11:13.745 13:40:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:11:13.745 13:40:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2fb0f499-ecd7-4a49-9c52-157620f6c428 -a 10.0.0.2 -s 4420 -i 4
00:11:13.745 13:40:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:11:13.745 13:40:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0
00:11:13.745 13:40:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:11:13.745 13:40:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]]
00:11:13.745 13:40:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1
00:11:13.745 13:40:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns
/dev/nvme0 -n 0x1 -o json
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:11:16.277 [ 0]:0x2
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0a1a2e7053704059a5af9d9df0b10a4d
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a1a2e7053704059a5af9d9df0b10a4d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:16.277 13:40:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:11:16.277 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:11:16.277 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:16.277 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:11:16.277 [ 0]:0x1
00:11:16.277 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:16.277 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:16.277 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=932b3e2fb9194853bf7ca15acc01cdd2
00:11:16.277 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 932b3e2fb9194853bf7ca15acc01cdd2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:16.277 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:11:16.277 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:16.277 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:11:16.277 [ 1]:0x2
00:11:16.277 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:11:16.277 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:16.277 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0a1a2e7053704059a5af9d9df0b10a4d
00:11:16.278 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a1a2e7053704059a5af9d9df0b10a4d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:16.278 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:11:16.535 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:11:16.535 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:11:16.535 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1
00:11:16.535 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
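The check being exercised here, ns_is_visible, is what the repeated ns_masking.sh@43-45 lines above expand to; condensed, it amounts to the helper below (a sketch reconstructed from the traced commands, not the verbatim script):

    # Sketch of the traced visibility check; $1 is an NSID such as 0x1.
    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        # A namespace masked from this host identifies with an all-zero NGUID.
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

The NOT wrapper around it (autotest_common.sh@650-@677) inverts the result, which is why a masked namespace shows up above as nguid=00000000000000000000000000000000 followed by es=1 and a passing (( !es == 0 )).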
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:11:16.793 [ 0]:0x2
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0a1a2e7053704059a5af9d9df0b10a4d
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a1a2e7053704059a5af9d9df0b10a4d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:16.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:16.793 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:11:17.053 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:11:17.053 13:40:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2fb0f499-ecd7-4a49-9c52-157620f6c428 -a 10.0.0.2 -s 4420 -i 4
00:11:17.313 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:11:17.313 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0
00:11:17.313 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:11:17.313 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]]
00:11:17.313 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2
00:11:17.313 13:40:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2
00:11:19.219 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:11:19.219 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:11:19.219 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:11:19.219 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2
00:11:19.219 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:11:19.219 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0
00:11:19.219 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:11:19.219 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:11:19.477 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:11:19.477 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:11:19.477 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:11:19.477 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:19.477 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:11:19.477 [ 0]:0x1
00:11:19.477 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:19.477 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:19.477 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=932b3e2fb9194853bf7ca15acc01cdd2
00:11:19.477 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 932b3e2fb9194853bf7ca15acc01cdd2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:19.477 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2
00:11:19.477 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:19.477 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:11:19.477 [ 1]:0x2
00:11:19.477 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:11:19.477 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:19.734 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0a1a2e7053704059a5af9d9df0b10a4d
00:11:19.734 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a1a2e7053704059a5af9d9df0b10a4d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
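The reconnect that just completed boils down to the following sequence; a sketch assembled from the traced ns_masking.sh connect and waitforserial steps (the -I host identifier and -i queue count are simply what the trace passes):

    # Reconnect as host1 with an explicit host ID, then wait until both
    # namespaces surface as block devices with the test serial number.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 2fb0f499-ecd7-4a49-9c52-157620f6c428 -a 10.0.0.2 -s 4420 -i 4
    sleep 2
    nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
    (( nvme_devices == 2 ))   # the harness retries this check in a loop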
00:11:19.734 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:11:19.991 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1
00:11:19.991 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:11:19.991 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1
00:11:19.991 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible
00:11:19.991 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:19.991 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible
00:11:19.991 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:19.991 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1
00:11:19.991 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:19.991 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:11:19.992 [ 0]:0x2
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0a1a2e7053704059a5af9d9df0b10a4d
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a1a2e7053704059a5af9d9df0b10a4d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:11:19.992 13:40:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:11:20.301 [2024-07-25 13:40:17.149635] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:11:20.301 request:
00:11:20.301 {
00:11:20.301 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:11:20.301 "nsid": 2,
00:11:20.301 "host": "nqn.2016-06.io.spdk:host1",
00:11:20.301 "method": "nvmf_ns_remove_host",
00:11:20.301 "req_id": 1
00:11:20.301 }
00:11:20.301 Got JSON-RPC error response
00:11:20.301 response:
00:11:20.301 {
00:11:20.301 "code": -32602,
00:11:20.301 "message": "Invalid parameters"
00:11:20.301 }
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
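The nvmf_ns_remove_host call fails with -32602 here because, as the nvmf_rpc_ns_visible_paused error suggests, namespace 2 was never placed under per-host masking for this subsystem, so there is no host mapping to remove. A caller that wants to assert this behaviour only needs the exit status; a minimal sketch (rpc.py path shortened):

    # Expect the RPC to fail (JSON-RPC -32602 Invalid parameters) when the
    # host/namespace pairing does not exist; sketch only.
    if ! scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1; then
        echo 'remove_host failed as expected'
    fi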
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:11:20.301 [ 0]:0x2
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0a1a2e7053704059a5af9d9df0b10a4d
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a1a2e7053704059a5af9d9df0b10a4d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:20.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
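With the kernel initiator disconnected, the trace below switches to SPDK's own host-side stack: a second SPDK application is started on a separate RPC socket and the harness waits for it to listen. Condensed, the startup that follows amounts to this sketch (paths shortened from the trace):

    # Second SPDK app acting as the NVMe-oF host, pinned to core mask 0x2,
    # controlled through its own RPC socket /var/tmp/host.sock.
    build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
    hostpid=$!
    trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT
    waitforlisten "$hostpid" /var/tmp/host.sock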
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=535575
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 535575 /var/tmp/host.sock
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 535575 ']'
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:11:20.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:20.301 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:11:20.302 [2024-07-25 13:40:17.333896] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:11:20.302 [2024-07-25 13:40:17.333971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid535575 ]
00:11:20.560 EAL: No free 2048 kB hugepages reported on node 1
00:11:20.560 [2024-07-25 13:40:17.393385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:20.560 [2024-07-25 13:40:17.499383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:11:20.819 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:20.819 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0
00:11:20.819 13:40:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:21.076 13:40:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:21.334 13:40:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 073268e2-7283-462c-b57d-3e46033cf47e
00:11:21.334 13:40:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d -
00:11:21.334 13:40:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 073268E27283462CB57D3E46033CF47E -i
00:11:21.623 13:40:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c463e42f-bb16-4e05-90b1-739a87effc2e
00:11:21.623 13:40:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d -
00:11:21.623 13:40:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C463E42FBB164E0590B1739A87EFFC2E -i
00:11:21.906 13:40:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:11:22.164 13:40:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
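The -g arguments a few lines up are the namespace UUIDs rewritten as NGUIDs: dashes stripped, hex upper-cased. A tiny reimplementation consistent with the traced tr -d - call in nvmf/common.sh@759 (a sketch, not the verbatim helper):

    # 073268e2-7283-462c-b57d-3e46033cf47e -> 073268E27283462CB57D3E46033CF47E
    uuid2nguid() {
        local u=${1^^}     # upper-case the hex digits
        echo "${u//-/}"    # drop the dashes
    }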
00:11:22.422 13:40:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:11:22.422 13:40:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:11:22.988 nvme0n1
00:11:22.988 13:40:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:11:22.988 13:40:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:11:23.246 nvme1n2
00:11:23.246 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs
00:11:23.246 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name'
00:11:23.246 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:11:23.246 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort
00:11:23.246 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs
00:11:23.503 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]]
00:11:23.503 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1
00:11:23.503 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1
00:11:23.503 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid'
00:11:23.761 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 073268e2-7283-462c-b57d-3e46033cf47e == \0\7\3\2\6\8\e\2\-\7\2\8\3\-\4\6\2\c\-\b\5\7\d\-\3\e\4\6\0\3\3\c\f\4\7\e ]]
00:11:23.761 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2
00:11:23.761 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid'
00:11:23.761 13:40:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2
00:11:24.019 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ c463e42f-bb16-4e05-90b1-739a87effc2e == \c\4\6\3\e\4\2\f\-\b\b\1\6\-\4\e\0\5\-\9\0\b\1\-\7\3\9\a\8\7\e\f\f\c\2\e ]]
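Each host-side controller reaches only the namespace its host NQN was granted, and the bdev UUID comparisons above prove it end to end. The verification pattern, condensed (a sketch; socket path and NQNs as traced):

    # Attach as host1 via the host-side RPC socket, then confirm the single
    # visible bdev carries namespace 1's UUID (073268e2-...).
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
    scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'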
00:11:24.019 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 535575
00:11:24.019 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 535575 ']'
00:11:24.019 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 535575
00:11:24.019 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname
00:11:24.019 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:24.019 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 535575
00:11:24.019 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:11:24.019 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:11:24.019 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 535575'
00:11:24.019 killing process with pid 535575
00:11:24.019 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 535575
00:11:24.019 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 535575
00:11:24.586 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:24.845 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT
00:11:24.845 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini
00:11:24.845 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:24.845 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync
00:11:24.845 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:24.845 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e
00:11:24.845 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:24.845 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:24.845 rmmod nvme_tcp
00:11:24.845 rmmod nvme_fabrics
00:11:24.845 rmmod nvme_keyring
00:11:24.845 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:24.845 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e
00:11:24.845 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0
00:11:24.845 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 533998 ']'
00:11:24.845 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 533998
00:11:24.845 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 533998 ']'
00:11:24.845 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 533998
00:11:24.845 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname
00:11:24.845 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:24.845 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 533998
00:11:24.845 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:11:24.845 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:11:24.845 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 533998'
00:11:24.845 killing process with pid 533998
00:11:24.845 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 533998
00:11:24.845 13:40:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 533998
00:11:25.413 13:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:11:25.413 13:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
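Both killprocess rounds above run the same guard sequence before the kill. Roughly, the helper traced at autotest_common.sh@950-@974 behaves like this simplified sketch:

    # Simplified shape of the traced killprocess: check the PID is alive and
    # is not a sudo wrapper, announce, kill, then reap it.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }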
00:11:25.413 13:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:11:25.413 13:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:11:25.413 13:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns
00:11:25.413 13:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:25.413 13:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:25.413 13:40:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:11:27.316
00:11:27.316 real 0m21.049s
00:11:27.316 user 0m27.456s
00:11:27.316 sys 0m4.107s
00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:11:27.316 ************************************
00:11:27.316 END TEST nvmf_ns_masking
00:11:27.316 ************************************
00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]]
00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:27.316 ************************************
00:11:27.316 START TEST nvmf_nvme_cli
00:11:27.316 ************************************
00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:11:27.316 * Looking for test storage...
00:11:27.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.316 13:40:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.316 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:27.317 13:40:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:29.851 13:40:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:29.851 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:29.851 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:29.851 13:40:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:29.851 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.851 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:29.851 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:29.852 13:40:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:11:29.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:29.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms
00:11:29.852
00:11:29.852 --- 10.0.0.2 ping statistics ---
00:11:29.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:29.852 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms
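The ping that just succeeded (and the reverse one that follows) closes the loop on the split topology built above: the target-side NIC now lives in its own network namespace with 10.0.0.2 while the initiator keeps 10.0.0.1 in the root namespace. The traced nvmf/common.sh steps condense to this sketch:

    # Target NIC isolated in a netns; initiator NIC stays in the root netns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> target sanity check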
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:29.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:29.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms
00:11:29.852
00:11:29.852 --- 10.0.0.1 ping statistics ---
00:11:29.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:29.852 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=538108
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 538108
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 538108 ']'
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:29.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:29.852 13:40:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:11:29.852 [2024-07-25 13:40:26.499465] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:11:29.852 [2024-07-25 13:40:26.499550] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.852 EAL: No free 2048 kB hugepages reported on node 1 00:11:29.852 [2024-07-25 13:40:26.561933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:29.852 [2024-07-25 13:40:26.672327] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.852 [2024-07-25 13:40:26.672391] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:29.852 [2024-07-25 13:40:26.672406] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.852 [2024-07-25 13:40:26.672418] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.852 [2024-07-25 13:40:26.672428] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:29.852 [2024-07-25 13:40:26.672477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.852 [2024-07-25 13:40:26.672547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.852 [2024-07-25 13:40:26.672605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:29.852 [2024-07-25 13:40:26.672608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:30.789 [2024-07-25 13:40:27.505811] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:30.789 Malloc0 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:30.789 13:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:30.789 Malloc1 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:30.789 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:30.790 [2024-07-25 13:40:27.588573] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:11:30.790 00:11:30.790 Discovery Log Number of Records 2, Generation counter 2 00:11:30.790 =====Discovery Log Entry 0====== 00:11:30.790 trtype: tcp 00:11:30.790 adrfam: ipv4 00:11:30.790 subtype: current discovery subsystem 00:11:30.790 treq: not required 
00:11:30.790 portid: 0 00:11:30.790 trsvcid: 4420 00:11:30.790 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:30.790 traddr: 10.0.0.2 00:11:30.790 eflags: explicit discovery connections, duplicate discovery information 00:11:30.790 sectype: none 00:11:30.790 =====Discovery Log Entry 1====== 00:11:30.790 trtype: tcp 00:11:30.790 adrfam: ipv4 00:11:30.790 subtype: nvme subsystem 00:11:30.790 treq: not required 00:11:30.790 portid: 0 00:11:30.790 trsvcid: 4420 00:11:30.790 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:30.790 traddr: 10.0.0.2 00:11:30.790 eflags: none 00:11:30.790 sectype: none 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:30.790 13:40:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:31.724 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:31.724 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:11:31.724 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:31.724 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:31.724 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:31.724 13:40:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:11:33.628 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:33.628 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:33.628 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:33.628 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:33.628 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:33.628 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:11:33.628 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:33.628 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:33.628 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:33.628 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:33.629 /dev/nvme0n1 ]] 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:33.629 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:33.629 rmmod nvme_tcp 00:11:33.629 rmmod nvme_fabrics 00:11:33.629 rmmod nvme_keyring 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 538108 ']' 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 538108 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 538108 ']' 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 538108 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 538108 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 538108' 00:11:33.629 killing process with pid 538108 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 538108 00:11:33.629 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 538108 00:11:34.197 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:34.197 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:34.197 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:34.197 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:34.197 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:34.198 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.198 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.198 13:40:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:36.101 00:11:36.101 real 0m8.758s 00:11:36.101 user 0m17.685s 00:11:36.101 sys 0m2.196s 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:36.101 ************************************ 00:11:36.101 END TEST nvmf_nvme_cli 00:11:36.101 ************************************ 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:36.101 ************************************ 00:11:36.101 START TEST nvmf_vfio_user 00:11:36.101 ************************************ 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:36.101 * Looking for test storage... 
00:11:36.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:36.101 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:36.361 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:36.361 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:36.361 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:36.361 13:40:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:36.361 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:36.361 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:36.361 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:36.361 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:36.361 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:36.361 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:36.361 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=538986 00:11:36.361 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:36.361 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 538986' 00:11:36.361 Process pid: 538986 00:11:36.361 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:36.361 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 538986 00:11:36.361 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 538986 ']' 00:11:36.361 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.361 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:36.361 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.361 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:36.361 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:36.361 [2024-07-25 13:40:33.183863] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:36.361 [2024-07-25 13:40:33.183945] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.361 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.361 [2024-07-25 13:40:33.242603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.361 [2024-07-25 13:40:33.351474] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.361 [2024-07-25 13:40:33.351529] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:36.361 [2024-07-25 13:40:33.351557] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.361 [2024-07-25 13:40:33.351569] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.361 [2024-07-25 13:40:33.351579] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:36.361 [2024-07-25 13:40:33.351631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.361 [2024-07-25 13:40:33.351690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.361 [2024-07-25 13:40:33.351755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:36.361 [2024-07-25 13:40:33.351758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.620 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:36.620 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:11:36.620 13:40:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:37.551 13:40:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:37.809 13:40:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:37.809 13:40:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:37.809 13:40:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:37.809 13:40:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:37.809 13:40:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:38.066 Malloc1 00:11:38.066 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:38.324 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:38.581 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:38.839 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:38.839 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:38.839 13:40:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:39.096 Malloc2 00:11:39.096 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
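setup_nvmf_vfio_user, traced above for vfio-user1 and continuing below for vfio-user2, creates a VFIOUSER transport once and then repeats the same per-device sequence: a socket directory, a 64 MB malloc bdev with 512-byte blocks, a subsystem, a namespace, and a VFIOUSER listener rooted at that directory. Condensed as a sketch, assuming scripts/rpc.py is run from the SPDK tree against the default /var/tmp/spdk.sock:

    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        # For VFIOUSER, -a names the directory holding the vfio-user socket files
        # rather than an IP address, and -s 0 stands in for a service port.
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done

The spdk_nvme_identify run that follows then attaches to vfio-user1's directory as if it were a PCI device, which is why the trace below walks the BAR mappings and the standard controller-enable state machine (CC.EN, CSTS.RDY) before dumping the identify data.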
00:11:39.353 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:39.611 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:39.869 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:39.869 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:39.869 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:39.869 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:39.869 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:39.869 13:40:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:39.869 [2024-07-25 13:40:36.868675] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:39.869 [2024-07-25 13:40:36.868714] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid539460 ] 00:11:39.869 EAL: No free 2048 kB hugepages reported on node 1 00:11:39.869 [2024-07-25 13:40:36.900243] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:40.130 [2024-07-25 13:40:36.909495] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:40.130 [2024-07-25 13:40:36.909525] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff6982bf000 00:11:40.130 [2024-07-25 13:40:36.910483] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:40.130 [2024-07-25 13:40:36.911478] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:40.130 [2024-07-25 13:40:36.912482] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:40.130 [2024-07-25 13:40:36.913487] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:40.130 [2024-07-25 13:40:36.914489] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:40.131 [2024-07-25 13:40:36.915489] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:40.131 [2024-07-25 13:40:36.916498] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:40.131 [2024-07-25 13:40:36.917502] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:40.131 [2024-07-25 13:40:36.918515] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:40.131 [2024-07-25 13:40:36.918534] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff6982b4000 00:11:40.131 [2024-07-25 13:40:36.919651] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:40.131 [2024-07-25 13:40:36.935261] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:40.131 [2024-07-25 13:40:36.935298] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:40.131 [2024-07-25 13:40:36.937620] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:40.131 [2024-07-25 13:40:36.937677] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:40.131 [2024-07-25 13:40:36.937777] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:40.131 [2024-07-25 13:40:36.937810] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:40.131 [2024-07-25 13:40:36.937821] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:40.131 [2024-07-25 13:40:36.938611] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:40.131 [2024-07-25 13:40:36.938637] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:40.131 [2024-07-25 13:40:36.938651] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:40.131 [2024-07-25 13:40:36.939615] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:40.131 [2024-07-25 13:40:36.939635] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:40.131 [2024-07-25 13:40:36.939649] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:40.131 [2024-07-25 13:40:36.940618] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:40.131 [2024-07-25 13:40:36.940637] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:40.131 [2024-07-25 13:40:36.941623] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:40.131 [2024-07-25 13:40:36.941642] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:40.131 [2024-07-25 13:40:36.941651] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:40.131 [2024-07-25 13:40:36.941662] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:40.131 [2024-07-25 13:40:36.941772] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:40.131 [2024-07-25 13:40:36.941780] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:40.131 [2024-07-25 13:40:36.941789] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:40.131 [2024-07-25 13:40:36.946071] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:40.131 [2024-07-25 13:40:36.946656] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:40.131 [2024-07-25 13:40:36.947661] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:40.131 [2024-07-25 13:40:36.948653] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:40.131 [2024-07-25 13:40:36.948772] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:40.131 [2024-07-25 13:40:36.949676] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:40.131 [2024-07-25 13:40:36.949694] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:40.131 [2024-07-25 13:40:36.949703] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:40.131 [2024-07-25 13:40:36.949727] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:40.131 [2024-07-25 13:40:36.949741] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:40.131 [2024-07-25 13:40:36.949771] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:40.131 [2024-07-25 13:40:36.949780] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:40.131 [2024-07-25 13:40:36.949787] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:40.131 [2024-07-25 13:40:36.949809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:40.131 [2024-07-25 13:40:36.949870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:40.131 [2024-07-25 13:40:36.949890] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:40.131 [2024-07-25 13:40:36.949898] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:40.131 [2024-07-25 13:40:36.949905] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:40.131 [2024-07-25 13:40:36.949912] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:40.131 [2024-07-25 13:40:36.949921] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:40.131 [2024-07-25 13:40:36.949929] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:40.131 [2024-07-25 13:40:36.949936] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:40.131 [2024-07-25 13:40:36.949950] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:40.131 [2024-07-25 13:40:36.949969] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:40.131 [2024-07-25 13:40:36.949984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:40.131 [2024-07-25 13:40:36.950006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.131 [2024-07-25 13:40:36.950019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.131 [2024-07-25 13:40:36.950030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.131 [2024-07-25 13:40:36.950071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.131 [2024-07-25 13:40:36.950083] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:40.131 [2024-07-25 13:40:36.950100] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:40.131 [2024-07-25 13:40:36.950115] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:40.131 [2024-07-25 13:40:36.950127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:40.131 [2024-07-25 13:40:36.950139] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:40.131 
[2024-07-25 13:40:36.950147] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:40.131 [2024-07-25 13:40:36.950163] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:40.131 [2024-07-25 13:40:36.950175] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:40.131 [2024-07-25 13:40:36.950188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:40.131 [2024-07-25 13:40:36.950200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:40.131 [2024-07-25 13:40:36.950267] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:40.131 [2024-07-25 13:40:36.950283] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:40.132 [2024-07-25 13:40:36.950298] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:40.132 [2024-07-25 13:40:36.950307] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:40.132 [2024-07-25 13:40:36.950313] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:40.132 [2024-07-25 13:40:36.950322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:40.132 [2024-07-25 13:40:36.950339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:40.132 [2024-07-25 13:40:36.950358] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:40.132 [2024-07-25 13:40:36.950391] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:40.132 [2024-07-25 13:40:36.950407] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:40.132 [2024-07-25 13:40:36.950419] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:40.132 [2024-07-25 13:40:36.950427] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:40.132 [2024-07-25 13:40:36.950433] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:40.132 [2024-07-25 13:40:36.950442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:40.132 [2024-07-25 13:40:36.950471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:40.132 [2024-07-25 13:40:36.950498] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:11:40.132 [2024-07-25 13:40:36.950514] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:40.132 [2024-07-25 13:40:36.950526] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:40.132 [2024-07-25 13:40:36.950534] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:40.132 [2024-07-25 13:40:36.950540] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:40.132 [2024-07-25 13:40:36.950549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:40.132 [2024-07-25 13:40:36.950563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:40.132 [2024-07-25 13:40:36.950577] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:40.132 [2024-07-25 13:40:36.950589] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:11:40.132 [2024-07-25 13:40:36.950603] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:40.132 [2024-07-25 13:40:36.950617] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:11:40.132 [2024-07-25 13:40:36.950626] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:40.132 [2024-07-25 13:40:36.950635] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:40.132 [2024-07-25 13:40:36.950644] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:40.132 [2024-07-25 13:40:36.950652] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:40.132 [2024-07-25 13:40:36.950660] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:40.132 [2024-07-25 13:40:36.950690] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:40.132 [2024-07-25 13:40:36.950708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:40.132 [2024-07-25 13:40:36.950726] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:40.132 [2024-07-25 13:40:36.950738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:40.132 [2024-07-25 13:40:36.950753] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:40.132 [2024-07-25 
13:40:36.950764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:40.132 [2024-07-25 13:40:36.950780] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:40.132 [2024-07-25 13:40:36.950790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:40.132 [2024-07-25 13:40:36.950813] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:40.132 [2024-07-25 13:40:36.950826] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:40.132 [2024-07-25 13:40:36.950833] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:40.132 [2024-07-25 13:40:36.950839] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:40.132 [2024-07-25 13:40:36.950845] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:11:40.132 [2024-07-25 13:40:36.950854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:40.132 [2024-07-25 13:40:36.950866] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:40.132 [2024-07-25 13:40:36.950874] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:40.132 [2024-07-25 13:40:36.950879] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:40.132 [2024-07-25 13:40:36.950888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:40.132 [2024-07-25 13:40:36.950899] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:40.132 [2024-07-25 13:40:36.950907] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:40.132 [2024-07-25 13:40:36.950913] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:40.132 [2024-07-25 13:40:36.950921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:40.132 [2024-07-25 13:40:36.950934] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:40.132 [2024-07-25 13:40:36.950941] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:40.132 [2024-07-25 13:40:36.950947] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:40.132 [2024-07-25 13:40:36.950956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:40.132 [2024-07-25 13:40:36.950967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:40.132 [2024-07-25 13:40:36.950986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:40.132 [2024-07-25 
13:40:36.951005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:40.132 [2024-07-25 13:40:36.951017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:40.132 ===================================================== 00:11:40.132 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:40.132 ===================================================== 00:11:40.132 Controller Capabilities/Features 00:11:40.132 ================================ 00:11:40.132 Vendor ID: 4e58 00:11:40.132 Subsystem Vendor ID: 4e58 00:11:40.132 Serial Number: SPDK1 00:11:40.132 Model Number: SPDK bdev Controller 00:11:40.132 Firmware Version: 24.09 00:11:40.132 Recommended Arb Burst: 6 00:11:40.132 IEEE OUI Identifier: 8d 6b 50 00:11:40.132 Multi-path I/O 00:11:40.132 May have multiple subsystem ports: Yes 00:11:40.132 May have multiple controllers: Yes 00:11:40.132 Associated with SR-IOV VF: No 00:11:40.132 Max Data Transfer Size: 131072 00:11:40.132 Max Number of Namespaces: 32 00:11:40.132 Max Number of I/O Queues: 127 00:11:40.133 NVMe Specification Version (VS): 1.3 00:11:40.133 NVMe Specification Version (Identify): 1.3 00:11:40.133 Maximum Queue Entries: 256 00:11:40.133 Contiguous Queues Required: Yes 00:11:40.133 Arbitration Mechanisms Supported 00:11:40.133 Weighted Round Robin: Not Supported 00:11:40.133 Vendor Specific: Not Supported 00:11:40.133 Reset Timeout: 15000 ms 00:11:40.133 Doorbell Stride: 4 bytes 00:11:40.133 NVM Subsystem Reset: Not Supported 00:11:40.133 Command Sets Supported 00:11:40.133 NVM Command Set: Supported 00:11:40.133 Boot Partition: Not Supported 00:11:40.133 Memory Page Size Minimum: 4096 bytes 00:11:40.133 Memory Page Size Maximum: 4096 bytes 00:11:40.133 Persistent Memory Region: Not Supported 00:11:40.133 Optional Asynchronous Events Supported 00:11:40.133 Namespace Attribute Notices: Supported 00:11:40.133 Firmware Activation Notices: Not Supported 00:11:40.133 ANA Change Notices: Not Supported 00:11:40.133 PLE Aggregate Log Change Notices: Not Supported 00:11:40.133 LBA Status Info Alert Notices: Not Supported 00:11:40.133 EGE Aggregate Log Change Notices: Not Supported 00:11:40.133 Normal NVM Subsystem Shutdown event: Not Supported 00:11:40.133 Zone Descriptor Change Notices: Not Supported 00:11:40.133 Discovery Log Change Notices: Not Supported 00:11:40.133 Controller Attributes 00:11:40.133 128-bit Host Identifier: Supported 00:11:40.133 Non-Operational Permissive Mode: Not Supported 00:11:40.133 NVM Sets: Not Supported 00:11:40.133 Read Recovery Levels: Not Supported 00:11:40.133 Endurance Groups: Not Supported 00:11:40.133 Predictable Latency Mode: Not Supported 00:11:40.133 Traffic Based Keep ALive: Not Supported 00:11:40.133 Namespace Granularity: Not Supported 00:11:40.133 SQ Associations: Not Supported 00:11:40.133 UUID List: Not Supported 00:11:40.133 Multi-Domain Subsystem: Not Supported 00:11:40.133 Fixed Capacity Management: Not Supported 00:11:40.133 Variable Capacity Management: Not Supported 00:11:40.133 Delete Endurance Group: Not Supported 00:11:40.133 Delete NVM Set: Not Supported 00:11:40.133 Extended LBA Formats Supported: Not Supported 00:11:40.133 Flexible Data Placement Supported: Not Supported 00:11:40.133 00:11:40.133 Controller Memory Buffer Support 00:11:40.133 ================================ 00:11:40.133 Supported: No 00:11:40.133 00:11:40.133 Persistent 
Memory Region Support 00:11:40.133 ================================ 00:11:40.133 Supported: No 00:11:40.133 00:11:40.133 Admin Command Set Attributes 00:11:40.133 ============================ 00:11:40.133 Security Send/Receive: Not Supported 00:11:40.133 Format NVM: Not Supported 00:11:40.133 Firmware Activate/Download: Not Supported 00:11:40.133 Namespace Management: Not Supported 00:11:40.133 Device Self-Test: Not Supported 00:11:40.133 Directives: Not Supported 00:11:40.133 NVMe-MI: Not Supported 00:11:40.133 Virtualization Management: Not Supported 00:11:40.133 Doorbell Buffer Config: Not Supported 00:11:40.133 Get LBA Status Capability: Not Supported 00:11:40.133 Command & Feature Lockdown Capability: Not Supported 00:11:40.133 Abort Command Limit: 4 00:11:40.133 Async Event Request Limit: 4 00:11:40.133 Number of Firmware Slots: N/A 00:11:40.133 Firmware Slot 1 Read-Only: N/A 00:11:40.133 Firmware Activation Without Reset: N/A 00:11:40.133 Multiple Update Detection Support: N/A 00:11:40.133 Firmware Update Granularity: No Information Provided 00:11:40.133 Per-Namespace SMART Log: No 00:11:40.133 Asymmetric Namespace Access Log Page: Not Supported 00:11:40.133 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:11:40.133 Command Effects Log Page: Supported 00:11:40.133 Get Log Page Extended Data: Supported 00:11:40.133 Telemetry Log Pages: Not Supported 00:11:40.133 Persistent Event Log Pages: Not Supported 00:11:40.133 Supported Log Pages Log Page: May Support 00:11:40.133 Commands Supported & Effects Log Page: Not Supported 00:11:40.133 Feature Identifiers & Effects Log Page:May Support 00:11:40.133 NVMe-MI Commands & Effects Log Page: May Support 00:11:40.133 Data Area 4 for Telemetry Log: Not Supported 00:11:40.133 Error Log Page Entries Supported: 128 00:11:40.133 Keep Alive: Supported 00:11:40.133 Keep Alive Granularity: 10000 ms 00:11:40.133 00:11:40.133 NVM Command Set Attributes 00:11:40.133 ========================== 00:11:40.133 Submission Queue Entry Size 00:11:40.133 Max: 64 00:11:40.133 Min: 64 00:11:40.133 Completion Queue Entry Size 00:11:40.133 Max: 16 00:11:40.133 Min: 16 00:11:40.133 Number of Namespaces: 32 00:11:40.133 Compare Command: Supported 00:11:40.133 Write Uncorrectable Command: Not Supported 00:11:40.133 Dataset Management Command: Supported 00:11:40.133 Write Zeroes Command: Supported 00:11:40.133 Set Features Save Field: Not Supported 00:11:40.133 Reservations: Not Supported 00:11:40.133 Timestamp: Not Supported 00:11:40.133 Copy: Supported 00:11:40.133 Volatile Write Cache: Present 00:11:40.133 Atomic Write Unit (Normal): 1 00:11:40.133 Atomic Write Unit (PFail): 1 00:11:40.133 Atomic Compare & Write Unit: 1 00:11:40.133 Fused Compare & Write: Supported 00:11:40.133 Scatter-Gather List 00:11:40.133 SGL Command Set: Supported (Dword aligned) 00:11:40.133 SGL Keyed: Not Supported 00:11:40.133 SGL Bit Bucket Descriptor: Not Supported 00:11:40.133 SGL Metadata Pointer: Not Supported 00:11:40.133 Oversized SGL: Not Supported 00:11:40.133 SGL Metadata Address: Not Supported 00:11:40.133 SGL Offset: Not Supported 00:11:40.133 Transport SGL Data Block: Not Supported 00:11:40.133 Replay Protected Memory Block: Not Supported 00:11:40.133 00:11:40.133 Firmware Slot Information 00:11:40.133 ========================= 00:11:40.133 Active slot: 1 00:11:40.133 Slot 1 Firmware Revision: 24.09 00:11:40.133 00:11:40.133 00:11:40.133 Commands Supported and Effects 00:11:40.133 ============================== 00:11:40.133 Admin Commands 00:11:40.133 -------------- 00:11:40.133 Get 
Log Page (02h): Supported 00:11:40.133 Identify (06h): Supported 00:11:40.133 Abort (08h): Supported 00:11:40.133 Set Features (09h): Supported 00:11:40.133 Get Features (0Ah): Supported 00:11:40.133 Asynchronous Event Request (0Ch): Supported 00:11:40.133 Keep Alive (18h): Supported 00:11:40.133 I/O Commands 00:11:40.133 ------------ 00:11:40.133 Flush (00h): Supported LBA-Change 00:11:40.133 Write (01h): Supported LBA-Change 00:11:40.133 Read (02h): Supported 00:11:40.133 Compare (05h): Supported 00:11:40.133 Write Zeroes (08h): Supported LBA-Change 00:11:40.133 Dataset Management (09h): Supported LBA-Change 00:11:40.133 Copy (19h): Supported LBA-Change 00:11:40.133 00:11:40.133 Error Log 00:11:40.133 ========= 00:11:40.133 00:11:40.133 Arbitration 00:11:40.133 =========== 00:11:40.133 Arbitration Burst: 1 00:11:40.133 00:11:40.133 Power Management 00:11:40.133 ================ 00:11:40.133 Number of Power States: 1 00:11:40.133 Current Power State: Power State #0 00:11:40.133 Power State #0: 00:11:40.133 Max Power: 0.00 W 00:11:40.133 Non-Operational State: Operational 00:11:40.133 Entry Latency: Not Reported 00:11:40.134 Exit Latency: Not Reported 00:11:40.134 Relative Read Throughput: 0 00:11:40.134 Relative Read Latency: 0 00:11:40.134 Relative Write Throughput: 0 00:11:40.134 Relative Write Latency: 0 00:11:40.134 Idle Power: Not Reported 00:11:40.134 Active Power: Not Reported 00:11:40.134 Non-Operational Permissive Mode: Not Supported 00:11:40.134 00:11:40.134 Health Information 00:11:40.134 ================== 00:11:40.134 Critical Warnings: 00:11:40.134 Available Spare Space: OK 00:11:40.134 Temperature: OK 00:11:40.134 Device Reliability: OK 00:11:40.134 Read Only: No 00:11:40.134 Volatile Memory Backup: OK 00:11:40.134 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:40.134 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:40.134 Available Spare: 0% 00:11:40.134 Available Spare Threshold: 0% 00:11:40.134 [2024-07-25 13:40:36.951170] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:40.134 [2024-07-25 13:40:36.951187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:40.134 [2024-07-25 13:40:36.951233] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:40.134 [2024-07-25 13:40:36.951252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.134 [2024-07-25 13:40:36.951263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.134 [2024-07-25 13:40:36.951273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.134 [2024-07-25 13:40:36.951283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.134 [2024-07-25 13:40:36.951686] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:40.134 [2024-07-25 13:40:36.951736] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:40.134 [2024-07-25 13:40:36.952681] vfio_user.c:2798:disable_ctrlr: *NOTICE*:
/var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:40.134 [2024-07-25 13:40:36.952760] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:40.134 [2024-07-25 13:40:36.952774] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:40.134 [2024-07-25 13:40:36.953692] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:40.134 [2024-07-25 13:40:36.953716] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:11:40.134 [2024-07-25 13:40:36.953775] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:40.134 [2024-07-25 13:40:36.955738] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:40.134 Life Percentage Used: 0% 00:11:40.134 Data Units Read: 0 00:11:40.134 Data Units Written: 0 00:11:40.134 Host Read Commands: 0 00:11:40.134 Host Write Commands: 0 00:11:40.134 Controller Busy Time: 0 minutes 00:11:40.134 Power Cycles: 0 00:11:40.134 Power On Hours: 0 hours 00:11:40.134 Unsafe Shutdowns: 0 00:11:40.134 Unrecoverable Media Errors: 0 00:11:40.134 Lifetime Error Log Entries: 0 00:11:40.134 Warning Temperature Time: 0 minutes 00:11:40.134 Critical Temperature Time: 0 minutes 00:11:40.134 00:11:40.134 Number of Queues 00:11:40.134 ================ 00:11:40.134 Number of I/O Submission Queues: 127 00:11:40.134 Number of I/O Completion Queues: 127 00:11:40.134 00:11:40.134 Active Namespaces 00:11:40.134 ================= 00:11:40.134 Namespace ID:1 00:11:40.134 Error Recovery Timeout: Unlimited 00:11:40.134 Command Set Identifier: NVM (00h) 00:11:40.134 Deallocate: Supported 00:11:40.134 Deallocated/Unwritten Error: Not Supported 00:11:40.134 Deallocated Read Value: Unknown 00:11:40.134 Deallocate in Write Zeroes: Not Supported 00:11:40.134 Deallocated Guard Field: 0xFFFF 00:11:40.134 Flush: Supported 00:11:40.134 Reservation: Supported 00:11:40.134 Namespace Sharing Capabilities: Multiple Controllers 00:11:40.134 Size (in LBAs): 131072 (0GiB) 00:11:40.134 Capacity (in LBAs): 131072 (0GiB) 00:11:40.134 Utilization (in LBAs): 131072 (0GiB) 00:11:40.134 NGUID: 2D1F10C6A7684D3EA661A3B2D98E2921 00:11:40.134 UUID: 2d1f10c6-a768-4d3e-a661-a3b2d98e2921 00:11:40.134 Thin Provisioning: Not Supported 00:11:40.134 Per-NS Atomic Units: Yes 00:11:40.134 Atomic Boundary Size (Normal): 0 00:11:40.134 Atomic Boundary Size (PFail): 0 00:11:40.134 Atomic Boundary Offset: 0 00:11:40.134 Maximum Single Source Range Length: 65535 00:11:40.134 Maximum Copy Length: 65535 00:11:40.134 Maximum Source Range Count: 1 00:11:40.134 NGUID/EUI64 Never Reused: No 00:11:40.134 Namespace Write Protected: No 00:11:40.134 Number of LBA Formats: 1 00:11:40.134 Current LBA Format: LBA Format #00 00:11:40.134 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:40.134 00:11:40.134 13:40:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:40.134 EAL: No free 2048 kB hugepages reported
on node 1 00:11:40.394 [2024-07-25 13:40:37.187918] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:45.663 Initializing NVMe Controllers 00:11:45.663 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:45.663 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:45.663 Initialization complete. Launching workers. 00:11:45.663 ======================================================== 00:11:45.663 Latency(us) 00:11:45.663 Device Information : IOPS MiB/s Average min max 00:11:45.663 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33620.40 131.33 3808.42 1175.09 8290.18 00:11:45.663 ======================================================== 00:11:45.663 Total : 33620.40 131.33 3808.42 1175.09 8290.18 00:11:45.663 00:11:45.663 [2024-07-25 13:40:42.213647] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:45.663 13:40:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:45.663 EAL: No free 2048 kB hugepages reported on node 1 00:11:45.663 [2024-07-25 13:40:42.443674] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:50.978 Initializing NVMe Controllers 00:11:50.978 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:50.978 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:50.978 Initialization complete. Launching workers. 
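A quick sanity check on the read-throughput table above (an editorial sketch, not part of the harness): with the 4096-byte I/O size passed via -o 4096, throughput in MiB/s should equal IOPS x 4096 / 2^20, which matches the reported pairing of 33620.40 IOPS and 131.33 MiB/s.

  # Verify the 4 KiB read row: 33620.40 IOPS -> MiB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 33620.40 * 4096 / (1024 * 1024) }'   # prints 131.33 MiB/s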
00:11:50.978 ======================================================== 00:11:50.978 Latency(us) 00:11:50.978 Device Information : IOPS MiB/s Average min max 00:11:50.978 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16000.00 62.50 8009.80 4986.46 15970.60 00:11:50.978 ======================================================== 00:11:50.978 Total : 16000.00 62.50 8009.80 4986.46 15970.60 00:11:50.978 00:11:50.978 [2024-07-25 13:40:47.479754] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:50.978 13:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:50.978 EAL: No free 2048 kB hugepages reported on node 1 00:11:50.978 [2024-07-25 13:40:47.693811] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:56.288 [2024-07-25 13:40:52.771493] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:56.288 Initializing NVMe Controllers 00:11:56.288 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:56.288 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:56.288 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:56.288 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:56.288 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:56.288 Initialization complete. Launching workers. 00:11:56.288 Starting thread on core 2 00:11:56.288 Starting thread on core 3 00:11:56.288 Starting thread on core 1 00:11:56.288 13:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:56.288 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.288 [2024-07-25 13:40:53.077530] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:59.569 [2024-07-25 13:40:56.276321] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:59.569 Initializing NVMe Controllers 00:11:59.569 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:59.569 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:59.569 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:59.569 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:59.569 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:59.569 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:59.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:59.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:59.569 Initialization complete. Launching workers. 
00:11:59.569 Starting thread on core 1 with urgent priority queue 00:11:59.569 Starting thread on core 2 with urgent priority queue 00:11:59.569 Starting thread on core 3 with urgent priority queue 00:11:59.569 Starting thread on core 0 with urgent priority queue 00:11:59.569 SPDK bdev Controller (SPDK1 ) core 0: 4362.00 IO/s 22.93 secs/100000 ios 00:11:59.569 SPDK bdev Controller (SPDK1 ) core 1: 4705.00 IO/s 21.25 secs/100000 ios 00:11:59.569 SPDK bdev Controller (SPDK1 ) core 2: 5392.33 IO/s 18.54 secs/100000 ios 00:11:59.569 SPDK bdev Controller (SPDK1 ) core 3: 5057.67 IO/s 19.77 secs/100000 ios 00:11:59.569 ======================================================== 00:11:59.569 00:11:59.569 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:59.569 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.569 [2024-07-25 13:40:56.579632] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:59.828 Initializing NVMe Controllers 00:11:59.828 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:59.828 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:59.828 Namespace ID: 1 size: 0GB 00:11:59.828 Initialization complete. 00:11:59.828 INFO: using host memory buffer for IO 00:11:59.828 Hello world! 00:11:59.828 [2024-07-25 13:40:56.613188] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:59.828 13:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:59.828 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.087 [2024-07-25 13:40:56.900538] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:01.025 Initializing NVMe Controllers 00:12:01.025 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:01.025 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:01.025 Initialization complete. Launching workers. 
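The two per-core columns in the arbitration summary above encode the same rate: "secs/100000 ios" is simply 100000 divided by the IO/s figure. A one-line check using the core 0 row (editorial sketch, not harness code):

  # 100000 ios at 4362.00 IO/s should take ~22.93 s, matching the log
  awk 'BEGIN { printf "%.2f secs/100000 ios\n", 100000 / 4362.00 }'   # prints 22.93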
00:12:01.025 submit (in ns) avg, min, max = 8875.5, 3517.8, 4024978.9 00:12:01.025 complete (in ns) avg, min, max = 25193.2, 2064.4, 4016866.7 00:12:01.025 00:12:01.025 Submit histogram 00:12:01.025 ================ 00:12:01.025 Range in us Cumulative Count 00:12:01.025 3.508 - 3.532: 0.1760% ( 23) 00:12:01.025 3.532 - 3.556: 0.8263% ( 85) 00:12:01.025 3.556 - 3.579: 2.8613% ( 266) 00:12:01.025 3.579 - 3.603: 7.3139% ( 582) 00:12:01.025 3.603 - 3.627: 14.9721% ( 1001) 00:12:01.025 3.627 - 3.650: 25.1243% ( 1327) 00:12:01.025 3.650 - 3.674: 34.9705% ( 1287) 00:12:01.025 3.674 - 3.698: 43.4244% ( 1105) 00:12:01.025 3.698 - 3.721: 50.4858% ( 923) 00:12:01.025 3.721 - 3.745: 55.3286% ( 633) 00:12:01.025 3.745 - 3.769: 59.5670% ( 554) 00:12:01.025 3.769 - 3.793: 63.2928% ( 487) 00:12:01.025 3.793 - 3.816: 66.1617% ( 375) 00:12:01.025 3.816 - 3.840: 69.0919% ( 383) 00:12:01.025 3.840 - 3.864: 73.1620% ( 532) 00:12:01.025 3.864 - 3.887: 77.3315% ( 545) 00:12:01.025 3.887 - 3.911: 81.3863% ( 530) 00:12:01.025 3.911 - 3.935: 84.5536% ( 414) 00:12:01.025 3.935 - 3.959: 86.7187% ( 283) 00:12:01.025 3.959 - 3.982: 88.3100% ( 208) 00:12:01.025 3.982 - 4.006: 89.9625% ( 216) 00:12:01.025 4.006 - 4.030: 91.0336% ( 140) 00:12:01.025 4.030 - 4.053: 91.9593% ( 121) 00:12:01.025 4.053 - 4.077: 92.6172% ( 86) 00:12:01.025 4.077 - 4.101: 93.3211% ( 92) 00:12:01.025 4.101 - 4.124: 94.1167% ( 104) 00:12:01.025 4.124 - 4.148: 94.8588% ( 97) 00:12:01.025 4.148 - 4.172: 95.3408% ( 63) 00:12:01.025 4.172 - 4.196: 95.6469% ( 40) 00:12:01.025 4.196 - 4.219: 95.9758% ( 43) 00:12:01.025 4.219 - 4.243: 96.2053% ( 30) 00:12:01.025 4.243 - 4.267: 96.3201% ( 15) 00:12:01.025 4.267 - 4.290: 96.4578% ( 18) 00:12:01.025 4.290 - 4.314: 96.5649% ( 14) 00:12:01.025 4.314 - 4.338: 96.6797% ( 15) 00:12:01.025 4.338 - 4.361: 96.7944% ( 15) 00:12:01.025 4.361 - 4.385: 96.8403% ( 6) 00:12:01.025 4.385 - 4.409: 96.8939% ( 7) 00:12:01.025 4.409 - 4.433: 96.9704% ( 10) 00:12:01.025 4.433 - 4.456: 97.0316% ( 8) 00:12:01.025 4.456 - 4.480: 97.0469% ( 2) 00:12:01.025 4.480 - 4.504: 97.1234% ( 10) 00:12:01.025 4.504 - 4.527: 97.1464% ( 3) 00:12:01.025 4.527 - 4.551: 97.1540% ( 1) 00:12:01.025 4.551 - 4.575: 97.1770% ( 3) 00:12:01.025 4.575 - 4.599: 97.2076% ( 4) 00:12:01.025 4.599 - 4.622: 97.2152% ( 1) 00:12:01.025 4.622 - 4.646: 97.2229% ( 1) 00:12:01.025 4.646 - 4.670: 97.2688% ( 6) 00:12:01.025 4.670 - 4.693: 97.2764% ( 1) 00:12:01.025 4.693 - 4.717: 97.3147% ( 5) 00:12:01.025 4.717 - 4.741: 97.3529% ( 5) 00:12:01.025 4.741 - 4.764: 97.4294% ( 10) 00:12:01.025 4.764 - 4.788: 97.4753% ( 6) 00:12:01.025 4.788 - 4.812: 97.4830% ( 1) 00:12:01.025 4.812 - 4.836: 97.5365% ( 7) 00:12:01.025 4.836 - 4.859: 97.5901% ( 7) 00:12:01.025 4.859 - 4.883: 97.6130% ( 3) 00:12:01.025 4.883 - 4.907: 97.6589% ( 6) 00:12:01.025 4.907 - 4.930: 97.7048% ( 6) 00:12:01.025 4.930 - 4.954: 97.7431% ( 5) 00:12:01.025 4.954 - 4.978: 97.7737% ( 4) 00:12:01.026 5.001 - 5.025: 97.8120% ( 5) 00:12:01.026 5.025 - 5.049: 97.8502% ( 5) 00:12:01.026 5.049 - 5.073: 97.8808% ( 4) 00:12:01.026 5.073 - 5.096: 97.9038% ( 3) 00:12:01.026 5.096 - 5.120: 97.9267% ( 3) 00:12:01.026 5.120 - 5.144: 97.9344% ( 1) 00:12:01.026 5.144 - 5.167: 97.9650% ( 4) 00:12:01.026 5.167 - 5.191: 97.9726% ( 1) 00:12:01.026 5.191 - 5.215: 97.9803% ( 1) 00:12:01.026 5.215 - 5.239: 97.9956% ( 2) 00:12:01.026 5.239 - 5.262: 98.0032% ( 1) 00:12:01.026 5.286 - 5.310: 98.0185% ( 2) 00:12:01.026 5.310 - 5.333: 98.0262% ( 1) 00:12:01.026 5.381 - 5.404: 98.0338% ( 1) 00:12:01.026 5.404 - 5.428: 98.0568% ( 3) 
00:12:01.026 5.523 - 5.547: 98.0644% ( 1) 00:12:01.026 5.618 - 5.641: 98.0797% ( 2) 00:12:01.026 5.641 - 5.665: 98.0874% ( 1) 00:12:01.026 5.760 - 5.784: 98.0950% ( 1) 00:12:01.026 5.950 - 5.973: 98.1027% ( 1) 00:12:01.026 6.044 - 6.068: 98.1103% ( 1) 00:12:01.026 6.258 - 6.305: 98.1256% ( 2) 00:12:01.026 6.353 - 6.400: 98.1333% ( 1) 00:12:01.026 6.590 - 6.637: 98.1409% ( 1) 00:12:01.026 6.827 - 6.874: 98.1486% ( 1) 00:12:01.026 6.874 - 6.921: 98.1639% ( 2) 00:12:01.026 6.969 - 7.016: 98.1715% ( 1) 00:12:01.026 7.443 - 7.490: 98.1792% ( 1) 00:12:01.026 7.490 - 7.538: 98.1868% ( 1) 00:12:01.026 7.585 - 7.633: 98.1945% ( 1) 00:12:01.026 7.633 - 7.680: 98.2021% ( 1) 00:12:01.026 7.727 - 7.775: 98.2174% ( 2) 00:12:01.026 7.775 - 7.822: 98.2251% ( 1) 00:12:01.026 7.822 - 7.870: 98.2404% ( 2) 00:12:01.026 7.870 - 7.917: 98.2557% ( 2) 00:12:01.026 7.917 - 7.964: 98.2633% ( 1) 00:12:01.026 7.964 - 8.012: 98.2710% ( 1) 00:12:01.026 8.107 - 8.154: 98.2786% ( 1) 00:12:01.026 8.154 - 8.201: 98.2863% ( 1) 00:12:01.026 8.201 - 8.249: 98.2939% ( 1) 00:12:01.026 8.296 - 8.344: 98.3016% ( 1) 00:12:01.026 8.344 - 8.391: 98.3092% ( 1) 00:12:01.026 8.439 - 8.486: 98.3245% ( 2) 00:12:01.026 8.486 - 8.533: 98.3322% ( 1) 00:12:01.026 8.581 - 8.628: 98.3398% ( 1) 00:12:01.026 8.628 - 8.676: 98.3551% ( 2) 00:12:01.026 8.676 - 8.723: 98.3781% ( 3) 00:12:01.026 8.723 - 8.770: 98.3857% ( 1) 00:12:01.026 8.770 - 8.818: 98.3934% ( 1) 00:12:01.026 8.865 - 8.913: 98.4087% ( 2) 00:12:01.026 8.913 - 8.960: 98.4163% ( 1) 00:12:01.026 8.960 - 9.007: 98.4240% ( 1) 00:12:01.026 9.007 - 9.055: 98.4316% ( 1) 00:12:01.026 9.055 - 9.102: 98.4393% ( 1) 00:12:01.026 9.102 - 9.150: 98.4469% ( 1) 00:12:01.026 9.434 - 9.481: 98.4546% ( 1) 00:12:01.026 9.481 - 9.529: 98.4622% ( 1) 00:12:01.026 9.529 - 9.576: 98.4699% ( 1) 00:12:01.026 9.576 - 9.624: 98.4775% ( 1) 00:12:01.026 9.624 - 9.671: 98.4852% ( 1) 00:12:01.026 9.719 - 9.766: 98.4928% ( 1) 00:12:01.026 9.766 - 9.813: 98.5005% ( 1) 00:12:01.026 9.908 - 9.956: 98.5081% ( 1) 00:12:01.026 10.050 - 10.098: 98.5234% ( 2) 00:12:01.026 10.098 - 10.145: 98.5311% ( 1) 00:12:01.026 10.145 - 10.193: 98.5387% ( 1) 00:12:01.026 10.335 - 10.382: 98.5464% ( 1) 00:12:01.026 10.382 - 10.430: 98.5541% ( 1) 00:12:01.026 10.477 - 10.524: 98.5617% ( 1) 00:12:01.026 10.524 - 10.572: 98.5694% ( 1) 00:12:01.026 10.572 - 10.619: 98.5770% ( 1) 00:12:01.026 11.046 - 11.093: 98.5923% ( 2) 00:12:01.026 11.141 - 11.188: 98.6000% ( 1) 00:12:01.026 11.188 - 11.236: 98.6076% ( 1) 00:12:01.026 11.520 - 11.567: 98.6306% ( 3) 00:12:01.026 11.615 - 11.662: 98.6382% ( 1) 00:12:01.026 11.662 - 11.710: 98.6459% ( 1) 00:12:01.026 11.804 - 11.852: 98.6535% ( 1) 00:12:01.026 12.041 - 12.089: 98.6612% ( 1) 00:12:01.026 12.089 - 12.136: 98.6688% ( 1) 00:12:01.026 12.136 - 12.231: 98.6918% ( 3) 00:12:01.026 12.231 - 12.326: 98.7071% ( 2) 00:12:01.026 12.326 - 12.421: 98.7147% ( 1) 00:12:01.026 12.421 - 12.516: 98.7224% ( 1) 00:12:01.026 12.610 - 12.705: 98.7300% ( 1) 00:12:01.026 12.800 - 12.895: 98.7377% ( 1) 00:12:01.026 13.464 - 13.559: 98.7606% ( 3) 00:12:01.026 13.559 - 13.653: 98.7683% ( 1) 00:12:01.026 13.843 - 13.938: 98.7759% ( 1) 00:12:01.026 14.033 - 14.127: 98.7836% ( 1) 00:12:01.026 14.317 - 14.412: 98.7912% ( 1) 00:12:01.026 14.412 - 14.507: 98.8142% ( 3) 00:12:01.026 14.791 - 14.886: 98.8295% ( 2) 00:12:01.026 14.981 - 15.076: 98.8371% ( 1) 00:12:01.026 17.161 - 17.256: 98.8448% ( 1) 00:12:01.026 17.256 - 17.351: 98.8601% ( 2) 00:12:01.026 17.351 - 17.446: 98.8754% ( 2) 00:12:01.026 17.446 - 17.541: 98.9289% ( 
7) 00:12:01.026 17.541 - 17.636: 98.9519% ( 3) 00:12:01.026 17.636 - 17.730: 99.0360% ( 11) 00:12:01.026 17.730 - 17.825: 99.0743% ( 5) 00:12:01.026 17.825 - 17.920: 99.1355% ( 8) 00:12:01.026 17.920 - 18.015: 99.1737% ( 5) 00:12:01.026 18.015 - 18.110: 99.2196% ( 6) 00:12:01.026 18.110 - 18.204: 99.3268% ( 14) 00:12:01.026 18.204 - 18.299: 99.4033% ( 10) 00:12:01.026 18.299 - 18.394: 99.4721% ( 9) 00:12:01.026 18.394 - 18.489: 99.5410% ( 9) 00:12:01.026 18.489 - 18.584: 99.6404% ( 13) 00:12:01.026 18.584 - 18.679: 99.6787% ( 5) 00:12:01.026 18.679 - 18.773: 99.7246% ( 6) 00:12:01.026 18.773 - 18.868: 99.7552% ( 4) 00:12:01.026 18.868 - 18.963: 99.7705% ( 2) 00:12:01.026 18.963 - 19.058: 99.7781% ( 1) 00:12:01.026 19.058 - 19.153: 99.7858% ( 1) 00:12:01.026 19.247 - 19.342: 99.7934% ( 1) 00:12:01.026 19.342 - 19.437: 99.8011% ( 1) 00:12:01.026 19.437 - 19.532: 99.8087% ( 1) 00:12:01.026 19.721 - 19.816: 99.8164% ( 1) 00:12:01.026 20.006 - 20.101: 99.8240% ( 1) 00:12:01.026 21.902 - 21.997: 99.8317% ( 1) 00:12:01.026 22.376 - 22.471: 99.8393% ( 1) 00:12:01.026 22.661 - 22.756: 99.8470% ( 1) 00:12:01.026 23.135 - 23.230: 99.8546% ( 1) 00:12:01.026 24.462 - 24.652: 99.8623% ( 1) 00:12:01.026 27.686 - 27.876: 99.8699% ( 1) 00:12:01.026 30.341 - 30.530: 99.8776% ( 1) 00:12:01.026 3980.705 - 4004.978: 99.9541% ( 10) 00:12:01.026 4004.978 - 4029.250: 100.0000% ( 6) 00:12:01.026 00:12:01.026 Complete histogram 00:12:01.026 ================== 00:12:01.026 Range in us Cumulative Count 00:12:01.026 2.062 - 2.074: 2.9225% ( 382) 00:12:01.026 2.074 - 2.086: 30.0666% ( 3548) 00:12:01.026 2.086 - 2.098: 36.6613% ( 862) 00:12:01.026 2.098 - 2.110: 43.8222% ( 936) 00:12:01.026 2.110 - 2.121: 59.6129% ( 2064) 00:12:01.026 2.121 - 2.133: 62.0075% ( 313) 00:12:01.026 2.133 - 2.145: 66.5749% ( 597) 00:12:01.026 2.145 - 2.157: 73.5368% ( 910) 00:12:01.026 2.157 - 2.169: 74.5544% ( 133) 00:12:01.026 2.169 - 2.181: 78.0353% ( 455) 00:12:01.026 2.181 - 2.193: 82.0213% ( 521) 00:12:01.026 2.193 - 2.204: 82.8705% ( 111) 00:12:01.026 2.204 - 2.216: 84.3853% ( 198) 00:12:01.026 2.216 - 2.228: 88.0805% ( 483) 00:12:01.026 2.228 - 2.240: 90.1079% ( 265) 00:12:01.026 2.240 - 2.252: 91.9976% ( 247) 00:12:01.026 2.252 - 2.264: 93.5889% ( 208) 00:12:01.026 2.264 - 2.276: 93.9331% ( 45) 00:12:01.026 2.276 - 2.287: 94.1856% ( 33) 00:12:01.026 2.287 - 2.299: 94.5528% ( 48) 00:12:01.026 2.299 - 2.311: 95.1649% ( 80) 00:12:01.026 2.311 - 2.323: 95.5168% ( 46) 00:12:01.026 2.323 - 2.335: 95.5703% ( 7) 00:12:01.026 2.335 - 2.347: 95.6009% ( 4) 00:12:01.026 2.347 - 2.359: 95.6545% ( 7) 00:12:01.026 2.359 - 2.370: 95.7004% ( 6) 00:12:01.026 2.370 - 2.382: 95.9223% ( 29) 00:12:01.026 2.382 - 2.394: 96.2283% ( 40) 00:12:01.026 2.394 - 2.406: 96.5496% ( 42) 00:12:01.026 2.406 - 2.418: 96.7638% ( 28) 00:12:01.026 2.418 - 2.430: 96.9321% ( 22) 00:12:01.026 2.430 - 2.441: 97.1923% ( 34) 00:12:01.026 2.441 - 2.453: 97.3606% ( 22) 00:12:01.026 2.453 - 2.465: 97.5824% ( 29) 00:12:01.026 2.465 - 2.477: 97.7507% ( 22) 00:12:01.026 2.477 - 2.489: 97.8655% ( 15) 00:12:01.027 2.489 - 2.501: 97.9803% ( 15) 00:12:01.027 2.501 - 2.513: 98.0568% ( 10) 00:12:01.027 2.513 - 2.524: 98.1256% ( 9) 00:12:01.027 2.524 - 2.536: 98.1715% ( 6) 00:12:01.027 2.536 - 2.548: 98.2251% ( 7) 00:12:01.027 2.548 - 2.560: 98.2480% ( 3) 00:12:01.027 2.560 - 2.572: 98.2939% ( 6) 00:12:01.027 2.572 - 2.584: 98.3092% ( 2) 00:12:01.027 2.584 - 2.596: 98.3169% ( 1) 00:12:01.027 2.596 - 2.607: 98.3245% ( 1) 00:12:01.027 2.607 - 2.619: 98.3398% ( 2) 00:12:01.027 2.631 - 2.643: 
98.3628% ( 3) 00:12:01.027 2.655 - 2.667: 98.3704% ( 1) 00:12:01.027 2.702 - 2.714: 98.3781% ( 1) 00:12:01.027 2.726 - 2.738: 98.3857% ( 1) 00:12:01.027 2.750 - 2.761: 98.3934% ( 1) 00:12:01.027 2.797 - 2.809: 98.4010% ( 1) 00:12:01.027 2.833 - 2.844: 98.4087% ( 1) 00:12:01.027 2.844 - 2.856: 98.4163% ( 1) 00:12:01.027 3.200 - 3.224: 98.4240% ( 1) 00:12:01.027 3.224 - 3.247: 98.4316% ( 1) 00:12:01.027 3.247 - 3.271: 98.4469% ( 2) 00:12:01.027 3.271 - 3.295: 98.4546% ( 1) 00:12:01.027 3.295 - 3.319: 98.4699% ( 2) 00:12:01.027 3.319 - 3.342: 98.4928% ( 3) 00:12:01.027 [2024-07-25 13:40:57.923871] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:01.027 3.390 - 3.413: 98.5234% ( 4) 00:12:01.027 3.413 - 3.437: 98.5311% ( 1) 00:12:01.027 3.437 - 3.461: 98.5464% ( 2) 00:12:01.027 3.461 - 3.484: 98.5617% ( 2) 00:12:01.027 3.532 - 3.556: 98.5694% ( 1) 00:12:01.027 3.556 - 3.579: 98.5770% ( 1) 00:12:01.027 3.627 - 3.650: 98.5847% ( 1) 00:12:01.027 3.650 - 3.674: 98.6000% ( 2) 00:12:01.027 3.698 - 3.721: 98.6076% ( 1) 00:12:01.027 3.721 - 3.745: 98.6153% ( 1) 00:12:01.027 3.745 - 3.769: 98.6306% ( 2) 00:12:01.027 3.769 - 3.793: 98.6382% ( 1) 00:12:01.027 3.935 - 3.959: 98.6535% ( 2) 00:12:01.027 4.124 - 4.148: 98.6612% ( 1) 00:12:01.027 5.215 - 5.239: 98.6688% ( 1) 00:12:01.027 5.286 - 5.310: 98.6765% ( 1) 00:12:01.027 5.760 - 5.784: 98.6841% ( 1) 00:12:01.027 6.044 - 6.068: 98.6918% ( 1) 00:12:01.027 6.353 - 6.400: 98.6994% ( 1) 00:12:01.027 6.400 - 6.447: 98.7071% ( 1) 00:12:01.027 6.495 - 6.542: 98.7147% ( 1) 00:12:01.027 6.542 - 6.590: 98.7224% ( 1) 00:12:01.027 6.827 - 6.874: 98.7377% ( 2) 00:12:01.027 6.969 - 7.016: 98.7530% ( 2) 00:12:01.027 7.111 - 7.159: 98.7606% ( 1) 00:12:01.027 7.206 - 7.253: 98.7683% ( 1) 00:12:01.027 7.538 - 7.585: 98.7759% ( 1) 00:12:01.027 7.917 - 7.964: 98.7912% ( 2) 00:12:01.027 8.012 - 8.059: 98.7989% ( 1) 00:12:01.027 8.296 - 8.344: 98.8065% ( 1) 00:12:01.027 8.439 - 8.486: 98.8142% ( 1) 00:12:01.027 8.723 - 8.770: 98.8295% ( 2) 00:12:01.027 10.335 - 10.382: 98.8371% ( 1) 00:12:01.027 10.761 - 10.809: 98.8448% ( 1) 00:12:01.027 11.378 - 11.425: 98.8524% ( 1) 00:12:01.027 12.705 - 12.800: 98.8601% ( 1) 00:12:01.027 15.455 - 15.550: 98.8677% ( 1) 00:12:01.027 15.550 - 15.644: 98.8830% ( 2) 00:12:01.027 15.644 - 15.739: 98.8907% ( 1) 00:12:01.027 15.739 - 15.834: 98.9136% ( 3) 00:12:01.027 15.834 - 15.929: 98.9289% ( 2) 00:12:01.027 15.929 - 16.024: 98.9901% ( 8) 00:12:01.027 16.024 - 16.119: 99.0207% ( 4) 00:12:01.027 16.119 - 16.213: 99.0360% ( 2) 00:12:01.027 16.213 - 16.308: 99.0590% ( 3) 00:12:01.027 16.308 - 16.403: 99.0666% ( 1) 00:12:01.027 16.403 - 16.498: 99.1125% ( 6) 00:12:01.027 16.498 - 16.593: 99.1967% ( 11) 00:12:01.027 16.593 - 16.687: 99.2273% ( 4) 00:12:01.027 16.687 - 16.782: 99.2426% ( 2) 00:12:01.027 16.782 - 16.877: 99.2655% ( 3) 00:12:01.027 16.877 - 16.972: 99.3115% ( 6) 00:12:01.027 16.972 - 17.067: 99.3191% ( 1) 00:12:01.027 17.067 - 17.161: 99.3497% ( 4) 00:12:01.027 17.161 - 17.256: 99.3650% ( 2) 00:12:01.027 17.256 - 17.351: 99.3803% ( 2) 00:12:01.027 18.015 - 18.110: 99.3880% ( 1) 00:12:01.027 18.394 - 18.489: 99.4033% ( 2) 00:12:01.027 21.713 - 21.807: 99.4109% ( 1) 00:12:01.027 22.376 - 22.471: 99.4186% ( 1) 00:12:01.027 124.397 - 125.156: 99.4262% ( 1) 00:12:01.027 3810.797 - 3835.070: 99.4339% ( 1) 00:12:01.027 3980.705 - 4004.978: 99.8317% ( 52) 00:12:01.027 4004.978 - 4029.250: 100.0000% ( 22) 00:12:01.027 00:12:01.027 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user --
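For readers unfamiliar with the overhead tool's histogram format above: each row is a latency bucket (range in microseconds), the percentage column is cumulative, and the parenthesized figure is the per-bucket count. A minimal awk sketch of how such a cumulative column is derived (buckets.txt, with "upper_bound_us count" rows, is hypothetical input for illustration, not a harness file):

  # Reproduce the cumulative-percentage column from (bucket, count) pairs
  awk '{ bound[NR] = $1; cnt[NR] = $2; total += $2 }
       END { for (i = 1; i <= NR; i++) {
               cum += cnt[i]
               printf "%12s: %8.4f%% ( %d)\n", bound[i], 100 * cum / total, cnt[i]
             } }' buckets.txt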
target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:01.027 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:01.027 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:01.027 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:01.027 13:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:01.285 [ 00:12:01.285 { 00:12:01.285 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:01.285 "subtype": "Discovery", 00:12:01.285 "listen_addresses": [], 00:12:01.285 "allow_any_host": true, 00:12:01.285 "hosts": [] 00:12:01.285 }, 00:12:01.285 { 00:12:01.285 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:01.285 "subtype": "NVMe", 00:12:01.285 "listen_addresses": [ 00:12:01.285 { 00:12:01.285 "trtype": "VFIOUSER", 00:12:01.285 "adrfam": "IPv4", 00:12:01.285 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:01.285 "trsvcid": "0" 00:12:01.285 } 00:12:01.285 ], 00:12:01.285 "allow_any_host": true, 00:12:01.285 "hosts": [], 00:12:01.285 "serial_number": "SPDK1", 00:12:01.285 "model_number": "SPDK bdev Controller", 00:12:01.285 "max_namespaces": 32, 00:12:01.285 "min_cntlid": 1, 00:12:01.285 "max_cntlid": 65519, 00:12:01.285 "namespaces": [ 00:12:01.285 { 00:12:01.285 "nsid": 1, 00:12:01.285 "bdev_name": "Malloc1", 00:12:01.285 "name": "Malloc1", 00:12:01.285 "nguid": "2D1F10C6A7684D3EA661A3B2D98E2921", 00:12:01.285 "uuid": "2d1f10c6-a768-4d3e-a661-a3b2d98e2921" 00:12:01.285 } 00:12:01.285 ] 00:12:01.285 }, 00:12:01.285 { 00:12:01.285 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:01.285 "subtype": "NVMe", 00:12:01.285 "listen_addresses": [ 00:12:01.285 { 00:12:01.285 "trtype": "VFIOUSER", 00:12:01.285 "adrfam": "IPv4", 00:12:01.285 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:01.285 "trsvcid": "0" 00:12:01.285 } 00:12:01.285 ], 00:12:01.285 "allow_any_host": true, 00:12:01.285 "hosts": [], 00:12:01.285 "serial_number": "SPDK2", 00:12:01.285 "model_number": "SPDK bdev Controller", 00:12:01.285 "max_namespaces": 32, 00:12:01.285 "min_cntlid": 1, 00:12:01.285 "max_cntlid": 65519, 00:12:01.285 "namespaces": [ 00:12:01.285 { 00:12:01.285 "nsid": 1, 00:12:01.285 "bdev_name": "Malloc2", 00:12:01.285 "name": "Malloc2", 00:12:01.285 "nguid": "EAF854D966BE40D399F792CF032A686F", 00:12:01.285 "uuid": "eaf854d9-66be-40d3-99f7-92cf032a686f" 00:12:01.286 } 00:12:01.286 ] 00:12:01.286 } 00:12:01.286 ] 00:12:01.286 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:01.286 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=541983 00:12:01.286 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:01.286 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:01.286 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 
00:12:01.286 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:01.286 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:01.286 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:01.286 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:01.286 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:01.286 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.544 [2024-07-25 13:40:58.395535] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:01.544 Malloc3 00:12:01.544 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:01.801 [2024-07-25 13:40:58.749267] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:01.801 13:40:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:01.801 Asynchronous Event Request test 00:12:01.801 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:01.801 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:01.801 Registering asynchronous event callbacks... 00:12:01.801 Starting namespace attribute notice tests for all controllers... 00:12:01.801 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:01.802 aer_cb - Changed Namespace 00:12:01.802 Cleaning up... 
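Putting the traced AER steps above together (a condensed editorial sketch of what nvmf_vfio_user.sh does here, with paths relative to the spdk checkout; the waitforfile body is an assumed shape reconstructed from the autotest_common.sh trace, not copied from it):

  # assumed shape: poll until the readiness touch file exists, with a timeout
  waitforfile() {
      local i=0
      while [ ! -e "$1" ]; do sleep 0.1; i=$((i + 1)); [ "$i" -lt 200 ] || return 1; done
      return 0
  }
  # start the aer tool, wait for readiness, then hot-add a namespace so the
  # tool observes the namespace-attribute-changed asynchronous event
  test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -n 2 -g -t /tmp/aer_touch_file &
  aerpid=$!
  waitforfile /tmp/aer_touch_file
  rm -f /tmp/aer_touch_file
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2   # fires the AER
  wait $aerpid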
00:12:02.061 [ 00:12:02.061 { 00:12:02.061 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:02.061 "subtype": "Discovery", 00:12:02.061 "listen_addresses": [], 00:12:02.061 "allow_any_host": true, 00:12:02.061 "hosts": [] 00:12:02.061 }, 00:12:02.061 { 00:12:02.061 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:02.061 "subtype": "NVMe", 00:12:02.061 "listen_addresses": [ 00:12:02.061 { 00:12:02.061 "trtype": "VFIOUSER", 00:12:02.061 "adrfam": "IPv4", 00:12:02.061 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:02.061 "trsvcid": "0" 00:12:02.061 } 00:12:02.061 ], 00:12:02.061 "allow_any_host": true, 00:12:02.061 "hosts": [], 00:12:02.061 "serial_number": "SPDK1", 00:12:02.061 "model_number": "SPDK bdev Controller", 00:12:02.061 "max_namespaces": 32, 00:12:02.061 "min_cntlid": 1, 00:12:02.061 "max_cntlid": 65519, 00:12:02.061 "namespaces": [ 00:12:02.061 { 00:12:02.061 "nsid": 1, 00:12:02.061 "bdev_name": "Malloc1", 00:12:02.061 "name": "Malloc1", 00:12:02.061 "nguid": "2D1F10C6A7684D3EA661A3B2D98E2921", 00:12:02.061 "uuid": "2d1f10c6-a768-4d3e-a661-a3b2d98e2921" 00:12:02.061 }, 00:12:02.061 { 00:12:02.061 "nsid": 2, 00:12:02.061 "bdev_name": "Malloc3", 00:12:02.061 "name": "Malloc3", 00:12:02.061 "nguid": "710F643980F444D0BDB06F96AB65DFAA", 00:12:02.061 "uuid": "710f6439-80f4-44d0-bdb0-6f96ab65dfaa" 00:12:02.061 } 00:12:02.061 ] 00:12:02.061 }, 00:12:02.061 { 00:12:02.061 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:02.061 "subtype": "NVMe", 00:12:02.061 "listen_addresses": [ 00:12:02.061 { 00:12:02.061 "trtype": "VFIOUSER", 00:12:02.061 "adrfam": "IPv4", 00:12:02.061 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:02.061 "trsvcid": "0" 00:12:02.061 } 00:12:02.061 ], 00:12:02.061 "allow_any_host": true, 00:12:02.061 "hosts": [], 00:12:02.061 "serial_number": "SPDK2", 00:12:02.061 "model_number": "SPDK bdev Controller", 00:12:02.061 "max_namespaces": 32, 00:12:02.061 "min_cntlid": 1, 00:12:02.061 "max_cntlid": 65519, 00:12:02.061 "namespaces": [ 00:12:02.061 { 00:12:02.061 "nsid": 1, 00:12:02.061 "bdev_name": "Malloc2", 00:12:02.061 "name": "Malloc2", 00:12:02.061 "nguid": "EAF854D966BE40D399F792CF032A686F", 00:12:02.061 "uuid": "eaf854d9-66be-40d3-99f7-92cf032a686f" 00:12:02.061 } 00:12:02.061 ] 00:12:02.061 } 00:12:02.061 ] 00:12:02.061 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 541983 00:12:02.061 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:02.061 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:02.061 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:02.061 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:02.061 [2024-07-25 13:40:59.026990] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
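The JSON above is the full nvmf_get_subsystems dump, now showing Malloc3 as nsid 2 alongside Malloc1. When only namespace identity matters, a filter like the following is handy (jq is an assumption of this aside, not something the test itself uses):

  # list namespace UUIDs per NVMe subsystem from the RPC output
  scripts/rpc.py nvmf_get_subsystems \
    | jq -r '.[] | select(.subtype == "NVMe") | .nqn as $nqn | .namespaces[] | "\($nqn) nsid=\(.nsid) uuid=\(.uuid)"'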
00:12:02.061 [2024-07-25 13:40:59.027032] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid541994 ] 00:12:02.061 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.061 [2024-07-25 13:40:59.060223] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:02.061 [2024-07-25 13:40:59.069359] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:02.061 [2024-07-25 13:40:59.069405] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f68d5737000 00:12:02.061 [2024-07-25 13:40:59.070379] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:02.061 [2024-07-25 13:40:59.071386] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:02.061 [2024-07-25 13:40:59.072406] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:02.061 [2024-07-25 13:40:59.073412] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:02.062 [2024-07-25 13:40:59.074420] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:02.062 [2024-07-25 13:40:59.075437] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:02.062 [2024-07-25 13:40:59.076429] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:02.062 [2024-07-25 13:40:59.077438] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:02.062 [2024-07-25 13:40:59.078462] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:02.062 [2024-07-25 13:40:59.078484] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f68d572c000 00:12:02.062 [2024-07-25 13:40:59.079612] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:02.321 [2024-07-25 13:40:59.098568] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:02.321 [2024-07-25 13:40:59.098606] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:02.321 [2024-07-25 13:40:59.100701] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:02.321 [2024-07-25 13:40:59.100756] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:02.321 [2024-07-25 13:40:59.100863] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:12:02.321 [2024-07-25 13:40:59.100890] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:02.321 [2024-07-25 13:40:59.100901] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:02.321 [2024-07-25 13:40:59.101705] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:02.321 [2024-07-25 13:40:59.101732] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:02.322 [2024-07-25 13:40:59.101746] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:02.322 [2024-07-25 13:40:59.102718] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:02.322 [2024-07-25 13:40:59.102738] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:02.322 [2024-07-25 13:40:59.102753] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:02.322 [2024-07-25 13:40:59.103725] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:02.322 [2024-07-25 13:40:59.103746] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:02.322 [2024-07-25 13:40:59.104733] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:02.322 [2024-07-25 13:40:59.104754] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:02.322 [2024-07-25 13:40:59.104764] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:02.322 [2024-07-25 13:40:59.104776] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:02.322 [2024-07-25 13:40:59.104885] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:02.322 [2024-07-25 13:40:59.104894] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:02.322 [2024-07-25 13:40:59.104902] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:02.322 [2024-07-25 13:40:59.105736] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:02.322 [2024-07-25 13:40:59.106739] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:02.322 [2024-07-25 13:40:59.107746] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:02.322 [2024-07-25 13:40:59.108748] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:02.322 [2024-07-25 13:40:59.108814] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:02.322 [2024-07-25 13:40:59.109764] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:02.322 [2024-07-25 13:40:59.109785] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:02.322 [2024-07-25 13:40:59.109794] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:02.322 [2024-07-25 13:40:59.109822] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:02.322 [2024-07-25 13:40:59.109836] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:02.322 [2024-07-25 13:40:59.109858] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:02.322 [2024-07-25 13:40:59.109868] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:02.322 [2024-07-25 13:40:59.109875] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:02.322 [2024-07-25 13:40:59.109895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:02.322 [2024-07-25 13:40:59.116076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:02.322 [2024-07-25 13:40:59.116100] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:02.322 [2024-07-25 13:40:59.116110] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:02.322 [2024-07-25 13:40:59.116118] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:02.322 [2024-07-25 13:40:59.116126] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:02.322 [2024-07-25 13:40:59.116135] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:02.322 [2024-07-25 13:40:59.116143] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:02.322 [2024-07-25 13:40:59.116152] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:02.322 [2024-07-25 13:40:59.116166] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:02.322 [2024-07-25 13:40:59.116187] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:02.322 [2024-07-25 13:40:59.124071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:02.322 [2024-07-25 13:40:59.124101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.322 [2024-07-25 13:40:59.124116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.322 [2024-07-25 13:40:59.124128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.322 [2024-07-25 13:40:59.124140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.322 [2024-07-25 13:40:59.124149] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:02.322 [2024-07-25 13:40:59.124164] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:02.322 [2024-07-25 13:40:59.124180] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:02.322 [2024-07-25 13:40:59.132087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:02.322 [2024-07-25 13:40:59.132111] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:02.322 [2024-07-25 13:40:59.132121] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:02.322 [2024-07-25 13:40:59.132137] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:02.322 [2024-07-25 13:40:59.132149] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:02.322 [2024-07-25 13:40:59.132163] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:02.322 [2024-07-25 13:40:59.140089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:02.322 [2024-07-25 13:40:59.140172] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:02.322 [2024-07-25 13:40:59.140189] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:02.322 [2024-07-25 13:40:59.140203] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:02.322 [2024-07-25 13:40:59.140212] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:02.322 [2024-07-25 
13:40:59.140219] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:02.322 [2024-07-25 13:40:59.140229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:02.322 [2024-07-25 13:40:59.148075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:02.322 [2024-07-25 13:40:59.148099] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:02.322 [2024-07-25 13:40:59.148121] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:02.322 [2024-07-25 13:40:59.148137] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:02.322 [2024-07-25 13:40:59.148151] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:02.322 [2024-07-25 13:40:59.148160] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:02.322 [2024-07-25 13:40:59.148167] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:02.322 [2024-07-25 13:40:59.148177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:02.323 [2024-07-25 13:40:59.156074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:02.323 [2024-07-25 13:40:59.156114] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:02.323 [2024-07-25 13:40:59.156133] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:02.323 [2024-07-25 13:40:59.156147] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:02.323 [2024-07-25 13:40:59.156156] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:02.323 [2024-07-25 13:40:59.156162] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:02.323 [2024-07-25 13:40:59.156172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:02.323 [2024-07-25 13:40:59.164073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:02.323 [2024-07-25 13:40:59.164096] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:02.323 [2024-07-25 13:40:59.164133] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:02.323 [2024-07-25 13:40:59.164150] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:02.323 [2024-07-25 
13:40:59.164165] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:02.323 [2024-07-25 13:40:59.164175] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:02.323 [2024-07-25 13:40:59.164184] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:02.323 [2024-07-25 13:40:59.164194] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:02.323 [2024-07-25 13:40:59.164202] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:02.323 [2024-07-25 13:40:59.164211] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:02.323 [2024-07-25 13:40:59.164238] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:02.323 [2024-07-25 13:40:59.172074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:02.323 [2024-07-25 13:40:59.172105] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:02.323 [2024-07-25 13:40:59.180072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:02.323 [2024-07-25 13:40:59.180098] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:02.323 [2024-07-25 13:40:59.188073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:02.323 [2024-07-25 13:40:59.188098] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:02.323 [2024-07-25 13:40:59.196071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:02.323 [2024-07-25 13:40:59.196119] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:02.323 [2024-07-25 13:40:59.196131] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:02.323 [2024-07-25 13:40:59.196138] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:02.323 [2024-07-25 13:40:59.196145] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:02.323 [2024-07-25 13:40:59.196151] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:02.323 [2024-07-25 13:40:59.196162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:02.323 [2024-07-25 13:40:59.196175] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:02.323 [2024-07-25 13:40:59.196184] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: 
*DEBUG*: prp1 = 0x2000002fc000 00:12:02.323 [2024-07-25 13:40:59.196194] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:02.323 [2024-07-25 13:40:59.196204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:02.323 [2024-07-25 13:40:59.196217] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:02.323 [2024-07-25 13:40:59.196226] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:02.323 [2024-07-25 13:40:59.196232] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:02.323 [2024-07-25 13:40:59.196241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:02.323 [2024-07-25 13:40:59.196255] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:02.323 [2024-07-25 13:40:59.196263] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:02.323 [2024-07-25 13:40:59.196270] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:02.323 [2024-07-25 13:40:59.196280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:02.323 [2024-07-25 13:40:59.204074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:02.323 [2024-07-25 13:40:59.204130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:02.323 [2024-07-25 13:40:59.204149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:02.323 [2024-07-25 13:40:59.204162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:02.323 ===================================================== 00:12:02.323 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:02.323 ===================================================== 00:12:02.323 Controller Capabilities/Features 00:12:02.323 ================================ 00:12:02.323 Vendor ID: 4e58 00:12:02.323 Subsystem Vendor ID: 4e58 00:12:02.323 Serial Number: SPDK2 00:12:02.323 Model Number: SPDK bdev Controller 00:12:02.323 Firmware Version: 24.09 00:12:02.323 Recommended Arb Burst: 6 00:12:02.323 IEEE OUI Identifier: 8d 6b 50 00:12:02.323 Multi-path I/O 00:12:02.323 May have multiple subsystem ports: Yes 00:12:02.323 May have multiple controllers: Yes 00:12:02.323 Associated with SR-IOV VF: No 00:12:02.323 Max Data Transfer Size: 131072 00:12:02.323 Max Number of Namespaces: 32 00:12:02.323 Max Number of I/O Queues: 127 00:12:02.323 NVMe Specification Version (VS): 1.3 00:12:02.323 NVMe Specification Version (Identify): 1.3 00:12:02.323 Maximum Queue Entries: 256 00:12:02.323 Contiguous Queues Required: Yes 00:12:02.323 Arbitration Mechanisms Supported 00:12:02.323 Weighted Round Robin: Not Supported 00:12:02.323 Vendor Specific: Not Supported 00:12:02.323 Reset Timeout: 15000 ms 00:12:02.323 Doorbell Stride: 4 
bytes 00:12:02.323 NVM Subsystem Reset: Not Supported 00:12:02.323 Command Sets Supported 00:12:02.323 NVM Command Set: Supported 00:12:02.323 Boot Partition: Not Supported 00:12:02.323 Memory Page Size Minimum: 4096 bytes 00:12:02.323 Memory Page Size Maximum: 4096 bytes 00:12:02.323 Persistent Memory Region: Not Supported 00:12:02.323 Optional Asynchronous Events Supported 00:12:02.323 Namespace Attribute Notices: Supported 00:12:02.323 Firmware Activation Notices: Not Supported 00:12:02.323 ANA Change Notices: Not Supported 00:12:02.323 PLE Aggregate Log Change Notices: Not Supported 00:12:02.323 LBA Status Info Alert Notices: Not Supported 00:12:02.323 EGE Aggregate Log Change Notices: Not Supported 00:12:02.323 Normal NVM Subsystem Shutdown event: Not Supported 00:12:02.323 Zone Descriptor Change Notices: Not Supported 00:12:02.323 Discovery Log Change Notices: Not Supported 00:12:02.323 Controller Attributes 00:12:02.323 128-bit Host Identifier: Supported 00:12:02.323 Non-Operational Permissive Mode: Not Supported 00:12:02.323 NVM Sets: Not Supported 00:12:02.323 Read Recovery Levels: Not Supported 00:12:02.323 Endurance Groups: Not Supported 00:12:02.323 Predictable Latency Mode: Not Supported 00:12:02.323 Traffic Based Keep ALive: Not Supported 00:12:02.323 Namespace Granularity: Not Supported 00:12:02.323 SQ Associations: Not Supported 00:12:02.323 UUID List: Not Supported 00:12:02.323 Multi-Domain Subsystem: Not Supported 00:12:02.323 Fixed Capacity Management: Not Supported 00:12:02.323 Variable Capacity Management: Not Supported 00:12:02.323 Delete Endurance Group: Not Supported 00:12:02.323 Delete NVM Set: Not Supported 00:12:02.323 Extended LBA Formats Supported: Not Supported 00:12:02.323 Flexible Data Placement Supported: Not Supported 00:12:02.324 00:12:02.324 Controller Memory Buffer Support 00:12:02.324 ================================ 00:12:02.324 Supported: No 00:12:02.324 00:12:02.324 Persistent Memory Region Support 00:12:02.324 ================================ 00:12:02.324 Supported: No 00:12:02.324 00:12:02.324 Admin Command Set Attributes 00:12:02.324 ============================ 00:12:02.324 Security Send/Receive: Not Supported 00:12:02.324 Format NVM: Not Supported 00:12:02.324 Firmware Activate/Download: Not Supported 00:12:02.324 Namespace Management: Not Supported 00:12:02.324 Device Self-Test: Not Supported 00:12:02.324 Directives: Not Supported 00:12:02.324 NVMe-MI: Not Supported 00:12:02.324 Virtualization Management: Not Supported 00:12:02.324 Doorbell Buffer Config: Not Supported 00:12:02.324 Get LBA Status Capability: Not Supported 00:12:02.324 Command & Feature Lockdown Capability: Not Supported 00:12:02.324 Abort Command Limit: 4 00:12:02.324 Async Event Request Limit: 4 00:12:02.324 Number of Firmware Slots: N/A 00:12:02.324 Firmware Slot 1 Read-Only: N/A 00:12:02.324 Firmware Activation Without Reset: N/A 00:12:02.324 Multiple Update Detection Support: N/A 00:12:02.324 Firmware Update Granularity: No Information Provided 00:12:02.324 Per-Namespace SMART Log: No 00:12:02.324 Asymmetric Namespace Access Log Page: Not Supported 00:12:02.324 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:02.324 Command Effects Log Page: Supported 00:12:02.324 Get Log Page Extended Data: Supported 00:12:02.324 Telemetry Log Pages: Not Supported 00:12:02.324 Persistent Event Log Pages: Not Supported 00:12:02.324 Supported Log Pages Log Page: May Support 00:12:02.324 Commands Supported & Effects Log Page: Not Supported 00:12:02.324 Feature Identifiers & Effects Log 
Page:May Support 00:12:02.324 NVMe-MI Commands & Effects Log Page: May Support 00:12:02.324 Data Area 4 for Telemetry Log: Not Supported 00:12:02.324 Error Log Page Entries Supported: 128 00:12:02.324 Keep Alive: Supported 00:12:02.324 Keep Alive Granularity: 10000 ms 00:12:02.324 00:12:02.324 NVM Command Set Attributes 00:12:02.324 ========================== 00:12:02.324 Submission Queue Entry Size 00:12:02.324 Max: 64 00:12:02.324 Min: 64 00:12:02.324 Completion Queue Entry Size 00:12:02.324 Max: 16 00:12:02.324 Min: 16 00:12:02.324 Number of Namespaces: 32 00:12:02.324 Compare Command: Supported 00:12:02.324 Write Uncorrectable Command: Not Supported 00:12:02.324 Dataset Management Command: Supported 00:12:02.324 Write Zeroes Command: Supported 00:12:02.324 Set Features Save Field: Not Supported 00:12:02.324 Reservations: Not Supported 00:12:02.324 Timestamp: Not Supported 00:12:02.324 Copy: Supported 00:12:02.324 Volatile Write Cache: Present 00:12:02.324 Atomic Write Unit (Normal): 1 00:12:02.324 Atomic Write Unit (PFail): 1 00:12:02.324 Atomic Compare & Write Unit: 1 00:12:02.324 Fused Compare & Write: Supported 00:12:02.324 Scatter-Gather List 00:12:02.324 SGL Command Set: Supported (Dword aligned) 00:12:02.324 SGL Keyed: Not Supported 00:12:02.324 SGL Bit Bucket Descriptor: Not Supported 00:12:02.324 SGL Metadata Pointer: Not Supported 00:12:02.324 Oversized SGL: Not Supported 00:12:02.324 SGL Metadata Address: Not Supported 00:12:02.324 SGL Offset: Not Supported 00:12:02.324 Transport SGL Data Block: Not Supported 00:12:02.324 Replay Protected Memory Block: Not Supported 00:12:02.324 00:12:02.324 Firmware Slot Information 00:12:02.324 ========================= 00:12:02.324 Active slot: 1 00:12:02.324 Slot 1 Firmware Revision: 24.09 00:12:02.324 00:12:02.324 00:12:02.324 Commands Supported and Effects 00:12:02.324 ============================== 00:12:02.324 Admin Commands 00:12:02.324 -------------- 00:12:02.324 Get Log Page (02h): Supported 00:12:02.324 Identify (06h): Supported 00:12:02.324 Abort (08h): Supported 00:12:02.324 Set Features (09h): Supported 00:12:02.324 Get Features (0Ah): Supported 00:12:02.324 Asynchronous Event Request (0Ch): Supported 00:12:02.324 Keep Alive (18h): Supported 00:12:02.324 I/O Commands 00:12:02.324 ------------ 00:12:02.324 Flush (00h): Supported LBA-Change 00:12:02.324 Write (01h): Supported LBA-Change 00:12:02.324 Read (02h): Supported 00:12:02.324 Compare (05h): Supported 00:12:02.324 Write Zeroes (08h): Supported LBA-Change 00:12:02.324 Dataset Management (09h): Supported LBA-Change 00:12:02.324 Copy (19h): Supported LBA-Change 00:12:02.324 00:12:02.324 Error Log 00:12:02.324 ========= 00:12:02.324 00:12:02.324 Arbitration 00:12:02.324 =========== 00:12:02.324 Arbitration Burst: 1 00:12:02.324 00:12:02.324 Power Management 00:12:02.324 ================ 00:12:02.324 Number of Power States: 1 00:12:02.324 Current Power State: Power State #0 00:12:02.324 Power State #0: 00:12:02.324 Max Power: 0.00 W 00:12:02.324 Non-Operational State: Operational 00:12:02.324 Entry Latency: Not Reported 00:12:02.324 Exit Latency: Not Reported 00:12:02.324 Relative Read Throughput: 0 00:12:02.324 Relative Read Latency: 0 00:12:02.324 Relative Write Throughput: 0 00:12:02.324 Relative Write Latency: 0 00:12:02.324 Idle Power: Not Reported 00:12:02.324 Active Power: Not Reported 00:12:02.324 Non-Operational Permissive Mode: Not Supported 00:12:02.324 00:12:02.324 Health Information 00:12:02.324 ================== 00:12:02.324 Critical Warnings: 00:12:02.324 
Available Spare Space: OK 00:12:02.324 Temperature: OK 00:12:02.324 Device Reliability: OK 00:12:02.324 Read Only: No 00:12:02.324 Volatile Memory Backup: OK 00:12:02.324 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:02.324 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:02.324 Available Spare: 0% 00:12:02.324 Available Spare Threshold: 0% 00:12:02.324 Life Percentage Used: 0% 00:12:02.324 Data Units Read: 0 00:12:02.324 Data Units Written: 0 00:12:02.324 Host Read Commands: 0 00:12:02.324 Host Write Commands: 0 00:12:02.324 Controller Busy Time: 0 minutes 00:12:02.324 Power Cycles: 0 00:12:02.324 Power On Hours: 0 hours 00:12:02.324 Unsafe Shutdowns: 0 00:12:02.324 Unrecoverable Media Errors: 0 00:12:02.324 Lifetime Error Log Entries: 0 00:12:02.324 Warning Temperature Time: 0 minutes 00:12:02.325 Critical Temperature Time: 0 minutes 00:12:02.325
[2024-07-25 13:40:59.204287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:02.324 [2024-07-25 13:40:59.212075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:02.324 [2024-07-25 13:40:59.212142] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:02.324 [2024-07-25 13:40:59.212161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.324 [2024-07-25 13:40:59.212172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.324 [2024-07-25 13:40:59.212183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.324 [2024-07-25 13:40:59.212194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.324 [2024-07-25 13:40:59.212265] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:02.324 [2024-07-25 13:40:59.212287] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:02.324 [2024-07-25 13:40:59.213268] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:02.324 [2024-07-25 13:40:59.213339] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:02.324 [2024-07-25 13:40:59.213354] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:02.324 [2024-07-25 13:40:59.214275] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:02.324 [2024-07-25 13:40:59.214303] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:02.324 [2024-07-25 13:40:59.214368] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:02.324 [2024-07-25 13:40:59.215563] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:02.324
00:12:02.325 Number of Queues 00:12:02.325 ================ 00:12:02.325 Number of I/O Submission Queues: 127 00:12:02.325 Number of I/O Completion Queues: 127 00:12:02.325 00:12:02.325 Active Namespaces 00:12:02.325 ================= 00:12:02.325 Namespace ID:1 00:12:02.325 Error Recovery Timeout: Unlimited 00:12:02.325 Command Set Identifier: NVM (00h) 00:12:02.325 Deallocate: Supported 00:12:02.325 Deallocated/Unwritten Error: Not Supported 00:12:02.325 Deallocated Read Value: Unknown 00:12:02.325 Deallocate in Write Zeroes: Not Supported 00:12:02.325 Deallocated Guard Field: 0xFFFF 00:12:02.325 Flush: Supported 00:12:02.325 Reservation: Supported 00:12:02.325 Namespace Sharing Capabilities: Multiple Controllers 00:12:02.325 Size (in LBAs): 131072 (0GiB) 00:12:02.325 Capacity (in LBAs): 131072 (0GiB) 00:12:02.325 Utilization (in LBAs): 131072 (0GiB) 00:12:02.325 NGUID: EAF854D966BE40D399F792CF032A686F 00:12:02.325 UUID: eaf854d9-66be-40d3-99f7-92cf032a686f 00:12:02.325 Thin Provisioning: Not Supported 00:12:02.325 Per-NS Atomic Units: Yes 00:12:02.325 Atomic Boundary Size (Normal): 0 00:12:02.325 Atomic Boundary Size (PFail): 0 00:12:02.325 Atomic Boundary Offset: 0 00:12:02.325 Maximum Single Source Range Length: 65535 00:12:02.325 Maximum Copy Length: 65535 00:12:02.325 Maximum Source Range Count: 1 00:12:02.325 NGUID/EUI64 Never Reused: No 00:12:02.325 Namespace Write Protected: No 00:12:02.325 Number of LBA Formats: 1 00:12:02.325 Current LBA Format: LBA Format #00 00:12:02.325 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:02.325 00:12:02.325 13:40:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:02.325 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.584 [2024-07-25 13:40:59.446905] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:07.853 Initializing NVMe Controllers 00:12:07.853 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:07.853 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:07.853 Initialization complete. Launching workers. 
00:12:07.853 ======================================================== 00:12:07.853 Latency(us) 00:12:07.853 Device Information : IOPS MiB/s Average min max 00:12:07.853 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33210.24 129.73 3853.42 1193.21 7444.54 00:12:07.853 ======================================================== 00:12:07.853 Total : 33210.24 129.73 3853.42 1193.21 7444.54 00:12:07.853 00:12:07.853 [2024-07-25 13:41:04.554468] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:07.853 13:41:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:07.853 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.853 [2024-07-25 13:41:04.797142] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:13.130 Initializing NVMe Controllers 00:12:13.131 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:13.131 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:13.131 Initialization complete. Launching workers. 00:12:13.131 ======================================================== 00:12:13.131 Latency(us) 00:12:13.131 Device Information : IOPS MiB/s Average min max 00:12:13.131 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31414.20 122.71 4076.61 1221.40 9239.13 00:12:13.131 ======================================================== 00:12:13.131 Total : 31414.20 122.71 4076.61 1221.40 9239.13 00:12:13.131 00:12:13.131 [2024-07-25 13:41:09.822756] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:13.131 13:41:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:13.131 EAL: No free 2048 kB hugepages reported on node 1 00:12:13.131 [2024-07-25 13:41:10.043033] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:18.411 [2024-07-25 13:41:15.200207] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:18.411 Initializing NVMe Controllers 00:12:18.411 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:18.411 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:18.411 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:18.411 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:18.411 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:18.411 Initialization complete. Launching workers. 
00:12:18.411 Starting thread on core 2 00:12:18.411 Starting thread on core 3 00:12:18.411 Starting thread on core 1 00:12:18.411 13:41:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:18.411 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.670 [2024-07-25 13:41:15.497532] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:21.957 [2024-07-25 13:41:18.557534] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:21.957 Initializing NVMe Controllers 00:12:21.957 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:21.957 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:21.957 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:21.957 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:21.957 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:21.957 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:21.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:21.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:21.957 Initialization complete. Launching workers. 00:12:21.957 Starting thread on core 1 with urgent priority queue 00:12:21.957 Starting thread on core 2 with urgent priority queue 00:12:21.957 Starting thread on core 3 with urgent priority queue 00:12:21.957 Starting thread on core 0 with urgent priority queue 00:12:21.957 SPDK bdev Controller (SPDK2 ) core 0: 6557.67 IO/s 15.25 secs/100000 ios 00:12:21.957 SPDK bdev Controller (SPDK2 ) core 1: 6449.00 IO/s 15.51 secs/100000 ios 00:12:21.957 SPDK bdev Controller (SPDK2 ) core 2: 7137.67 IO/s 14.01 secs/100000 ios 00:12:21.957 SPDK bdev Controller (SPDK2 ) core 3: 5730.00 IO/s 17.45 secs/100000 ios 00:12:21.957 ======================================================== 00:12:21.957 00:12:21.957 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:21.957 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.957 [2024-07-25 13:41:18.856557] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:21.957 Initializing NVMe Controllers 00:12:21.957 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:21.957 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:21.957 Namespace ID: 1 size: 0GB 00:12:21.957 Initialization complete. 00:12:21.957 INFO: using host memory buffer for IO 00:12:21.957 Hello world! 
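The hello_world run above (target/nvmf_vfio_user.sh@88) exercises the same attach path as the perf and arbitration runs: the controller is selected entirely through the -r transport ID string. A minimal sketch of reproducing it by hand, assuming an SPDK build tree at $SPDK_DIR (a placeholder, not taken from this log) and a target already listening on the traddr below:

    # transport ID and the -d/-g flags are copied from the @88 trace above
    SPDK_DIR=/path/to/spdk   # assumption: point at your own build tree
    $SPDK_DIR/build/examples/hello_world -d 256 -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'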
00:12:21.958 [2024-07-25 13:41:18.870633] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:21.958 13:41:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:21.958 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.215 [2024-07-25 13:41:19.170304] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:23.594 Initializing NVMe Controllers 00:12:23.594 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:23.594 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:23.594 Initialization complete. Launching workers. 00:12:23.594 submit (in ns) avg, min, max = 8621.3, 3513.3, 4035525.6 00:12:23.594 complete (in ns) avg, min, max = 25092.4, 2060.0, 4020096.7 00:12:23.594 00:12:23.594 Submit histogram 00:12:23.594 ================ 00:12:23.594 Range in us Cumulative Count 00:12:23.594 3.508 - 3.532: 0.3422% ( 45) 00:12:23.594 3.532 - 3.556: 1.2774% ( 123) 00:12:23.594 3.556 - 3.579: 3.6724% ( 315) 00:12:23.594 3.579 - 3.603: 8.8656% ( 683) 00:12:23.594 3.603 - 3.627: 16.3853% ( 989) 00:12:23.594 3.627 - 3.650: 26.0341% ( 1269) 00:12:23.594 3.650 - 3.674: 33.4322% ( 973) 00:12:23.594 3.674 - 3.698: 40.1156% ( 879) 00:12:23.594 3.698 - 3.721: 46.0462% ( 780) 00:12:23.595 3.721 - 3.745: 51.0265% ( 655) 00:12:23.595 3.745 - 3.769: 54.9650% ( 518) 00:12:23.595 3.769 - 3.793: 58.6147% ( 480) 00:12:23.595 3.793 - 3.816: 61.7625% ( 414) 00:12:23.595 3.816 - 3.840: 65.3361% ( 470) 00:12:23.595 3.840 - 3.864: 69.3887% ( 533) 00:12:23.595 3.864 - 3.887: 73.1524% ( 495) 00:12:23.595 3.887 - 3.911: 76.7640% ( 475) 00:12:23.595 3.911 - 3.935: 79.3872% ( 345) 00:12:23.595 3.935 - 3.959: 81.8583% ( 325) 00:12:23.595 3.959 - 3.982: 83.5006% ( 216) 00:12:23.595 3.982 - 4.006: 84.9909% ( 196) 00:12:23.595 4.006 - 4.030: 86.1618% ( 154) 00:12:23.595 4.030 - 4.053: 87.1807% ( 134) 00:12:23.595 4.053 - 4.077: 88.1159% ( 123) 00:12:23.595 4.077 - 4.101: 88.9446% ( 109) 00:12:23.595 4.101 - 4.124: 89.7050% ( 100) 00:12:23.595 4.124 - 4.148: 90.2372% ( 70) 00:12:23.595 4.148 - 4.172: 90.6858% ( 59) 00:12:23.595 4.172 - 4.196: 90.9976% ( 41) 00:12:23.595 4.196 - 4.219: 91.2561% ( 34) 00:12:23.595 4.219 - 4.243: 91.4614% ( 27) 00:12:23.595 4.243 - 4.267: 91.6058% ( 19) 00:12:23.595 4.267 - 4.290: 91.7275% ( 16) 00:12:23.595 4.290 - 4.314: 91.8339% ( 14) 00:12:23.595 4.314 - 4.338: 91.9784% ( 19) 00:12:23.595 4.338 - 4.361: 92.0773% ( 13) 00:12:23.595 4.361 - 4.385: 92.1913% ( 15) 00:12:23.595 4.385 - 4.409: 92.3130% ( 16) 00:12:23.595 4.409 - 4.433: 92.3890% ( 10) 00:12:23.595 4.433 - 4.456: 92.5259% ( 18) 00:12:23.595 4.456 - 4.480: 92.6019% ( 10) 00:12:23.595 4.480 - 4.504: 92.6931% ( 12) 00:12:23.595 4.504 - 4.527: 92.8072% ( 15) 00:12:23.595 4.527 - 4.551: 92.8832% ( 10) 00:12:23.595 4.551 - 4.575: 92.9745% ( 12) 00:12:23.595 4.575 - 4.599: 93.0429% ( 9) 00:12:23.595 4.599 - 4.622: 93.1341% ( 12) 00:12:23.595 4.622 - 4.646: 93.2482% ( 15) 00:12:23.595 4.646 - 4.670: 93.3394% ( 12) 00:12:23.595 4.670 - 4.693: 93.4307% ( 12) 00:12:23.595 4.693 - 4.717: 93.5219% ( 12) 00:12:23.595 4.717 - 4.741: 93.6892% ( 22) 00:12:23.595 4.741 - 4.764: 93.7652% ( 10) 00:12:23.595 4.764 - 4.788: 93.9173% ( 20) 00:12:23.595 4.788 - 4.812: 94.1074% ( 25) 00:12:23.595 4.812 - 
4.836: 94.2290% ( 16) 00:12:23.595 4.836 - 4.859: 94.3811% ( 20) 00:12:23.595 4.859 - 4.883: 94.5636% ( 24) 00:12:23.595 4.883 - 4.907: 94.7613% ( 26) 00:12:23.595 4.907 - 4.930: 94.9133% ( 20) 00:12:23.595 4.930 - 4.954: 95.0730% ( 21) 00:12:23.595 4.954 - 4.978: 95.2555% ( 24) 00:12:23.595 4.978 - 5.001: 95.4227% ( 22) 00:12:23.595 5.001 - 5.025: 95.5824% ( 21) 00:12:23.595 5.025 - 5.049: 95.7649% ( 24) 00:12:23.595 5.049 - 5.073: 95.8790% ( 15) 00:12:23.595 5.073 - 5.096: 95.9778% ( 13) 00:12:23.595 5.096 - 5.120: 96.0842% ( 14) 00:12:23.595 5.120 - 5.144: 96.1907% ( 14) 00:12:23.595 5.144 - 5.167: 96.3200% ( 17) 00:12:23.595 5.167 - 5.191: 96.4264% ( 14) 00:12:23.595 5.191 - 5.215: 96.5252% ( 13) 00:12:23.595 5.215 - 5.239: 96.6013% ( 10) 00:12:23.595 5.239 - 5.262: 96.6925% ( 12) 00:12:23.595 5.262 - 5.286: 96.7686% ( 10) 00:12:23.595 5.286 - 5.310: 96.8370% ( 9) 00:12:23.595 5.310 - 5.333: 96.9282% ( 12) 00:12:23.595 5.333 - 5.357: 96.9891% ( 8) 00:12:23.595 5.357 - 5.381: 97.0499% ( 8) 00:12:23.595 5.381 - 5.404: 97.1031% ( 7) 00:12:23.595 5.404 - 5.428: 97.1791% ( 10) 00:12:23.595 5.428 - 5.452: 97.2476% ( 9) 00:12:23.595 5.452 - 5.476: 97.2856% ( 5) 00:12:23.595 5.476 - 5.499: 97.3236% ( 5) 00:12:23.595 5.499 - 5.523: 97.3388% ( 2) 00:12:23.595 5.547 - 5.570: 97.3464% ( 1) 00:12:23.595 5.570 - 5.594: 97.3768% ( 4) 00:12:23.595 5.594 - 5.618: 97.4148% ( 5) 00:12:23.595 5.618 - 5.641: 97.4300% ( 2) 00:12:23.595 5.641 - 5.665: 97.4681% ( 5) 00:12:23.595 5.665 - 5.689: 97.4833% ( 2) 00:12:23.595 5.689 - 5.713: 97.5137% ( 4) 00:12:23.595 5.736 - 5.760: 97.5289% ( 2) 00:12:23.595 5.760 - 5.784: 97.5669% ( 5) 00:12:23.595 5.784 - 5.807: 97.5897% ( 3) 00:12:23.595 5.807 - 5.831: 97.6201% ( 4) 00:12:23.595 5.831 - 5.855: 97.6582% ( 5) 00:12:23.595 5.855 - 5.879: 97.7038% ( 6) 00:12:23.595 5.879 - 5.902: 97.7114% ( 1) 00:12:23.595 5.902 - 5.926: 97.7266% ( 2) 00:12:23.595 5.926 - 5.950: 97.7494% ( 3) 00:12:23.595 5.950 - 5.973: 97.7798% ( 4) 00:12:23.595 5.973 - 5.997: 97.7874% ( 1) 00:12:23.595 5.997 - 6.021: 97.8102% ( 3) 00:12:23.595 6.021 - 6.044: 97.8178% ( 1) 00:12:23.595 6.044 - 6.068: 97.8482% ( 4) 00:12:23.595 6.068 - 6.116: 97.8710% ( 3) 00:12:23.595 6.163 - 6.210: 97.9015% ( 4) 00:12:23.595 6.210 - 6.258: 97.9167% ( 2) 00:12:23.595 6.258 - 6.305: 97.9471% ( 4) 00:12:23.595 6.305 - 6.353: 97.9851% ( 5) 00:12:23.595 6.353 - 6.400: 97.9927% ( 1) 00:12:23.595 6.400 - 6.447: 98.0231% ( 4) 00:12:23.595 6.495 - 6.542: 98.0383% ( 2) 00:12:23.595 6.542 - 6.590: 98.0459% ( 1) 00:12:23.595 6.590 - 6.637: 98.0687% ( 3) 00:12:23.595 6.637 - 6.684: 98.0763% ( 1) 00:12:23.595 6.779 - 6.827: 98.0915% ( 2) 00:12:23.595 6.827 - 6.874: 98.1068% ( 2) 00:12:23.595 6.874 - 6.921: 98.1144% ( 1) 00:12:23.595 6.921 - 6.969: 98.1220% ( 1) 00:12:23.595 7.206 - 7.253: 98.1296% ( 1) 00:12:23.595 7.253 - 7.301: 98.1372% ( 1) 00:12:23.595 7.301 - 7.348: 98.1448% ( 1) 00:12:23.595 7.396 - 7.443: 98.1600% ( 2) 00:12:23.595 7.443 - 7.490: 98.1676% ( 1) 00:12:23.595 7.490 - 7.538: 98.1752% ( 1) 00:12:23.595 7.585 - 7.633: 98.1828% ( 1) 00:12:23.595 7.680 - 7.727: 98.1904% ( 1) 00:12:23.595 7.870 - 7.917: 98.1980% ( 1) 00:12:23.595 8.012 - 8.059: 98.2056% ( 1) 00:12:23.595 8.059 - 8.107: 98.2132% ( 1) 00:12:23.595 8.107 - 8.154: 98.2208% ( 1) 00:12:23.595 8.296 - 8.344: 98.2284% ( 1) 00:12:23.595 8.344 - 8.391: 98.2360% ( 1) 00:12:23.595 8.391 - 8.439: 98.2588% ( 3) 00:12:23.595 8.439 - 8.486: 98.2664% ( 1) 00:12:23.595 8.533 - 8.581: 98.2740% ( 1) 00:12:23.595 8.628 - 8.676: 98.2892% ( 2) 00:12:23.595 8.723 - 
8.770: 98.2968% ( 1) 00:12:23.595 8.770 - 8.818: 98.3044% ( 1) 00:12:23.595 8.818 - 8.865: 98.3196% ( 2) 00:12:23.595 8.913 - 8.960: 98.3273% ( 1) 00:12:23.595 8.960 - 9.007: 98.3349% ( 1) 00:12:23.595 9.007 - 9.055: 98.3425% ( 1) 00:12:23.595 9.102 - 9.150: 98.3653% ( 3) 00:12:23.595 9.150 - 9.197: 98.3881% ( 3) 00:12:23.595 9.387 - 9.434: 98.4109% ( 3) 00:12:23.595 9.434 - 9.481: 98.4185% ( 1) 00:12:23.595 9.529 - 9.576: 98.4261% ( 1) 00:12:23.595 9.624 - 9.671: 98.4413% ( 2) 00:12:23.595 9.766 - 9.813: 98.4489% ( 1) 00:12:23.595 9.813 - 9.861: 98.4565% ( 1) 00:12:23.595 9.908 - 9.956: 98.4641% ( 1) 00:12:23.595 9.956 - 10.003: 98.4793% ( 2) 00:12:23.595 10.098 - 10.145: 98.4869% ( 1) 00:12:23.595 10.193 - 10.240: 98.4945% ( 1) 00:12:23.595 10.287 - 10.335: 98.5021% ( 1) 00:12:23.595 10.335 - 10.382: 98.5097% ( 1) 00:12:23.595 10.430 - 10.477: 98.5173% ( 1) 00:12:23.595 10.619 - 10.667: 98.5249% ( 1) 00:12:23.595 10.856 - 10.904: 98.5325% ( 1) 00:12:23.595 10.951 - 10.999: 98.5401% ( 1) 00:12:23.595 10.999 - 11.046: 98.5477% ( 1) 00:12:23.595 11.093 - 11.141: 98.5554% ( 1) 00:12:23.595 11.188 - 11.236: 98.5630% ( 1) 00:12:23.595 11.236 - 11.283: 98.5782% ( 2) 00:12:23.595 11.283 - 11.330: 98.5858% ( 1) 00:12:23.595 11.330 - 11.378: 98.5934% ( 1) 00:12:23.595 11.378 - 11.425: 98.6010% ( 1) 00:12:23.595 11.804 - 11.852: 98.6086% ( 1) 00:12:23.595 11.994 - 12.041: 98.6162% ( 1) 00:12:23.595 12.089 - 12.136: 98.6238% ( 1) 00:12:23.595 12.136 - 12.231: 98.6314% ( 1) 00:12:23.596 12.231 - 12.326: 98.6390% ( 1) 00:12:23.596 12.421 - 12.516: 98.6542% ( 2) 00:12:23.596 12.516 - 12.610: 98.6694% ( 2) 00:12:23.596 12.610 - 12.705: 98.6770% ( 1) 00:12:23.596 12.705 - 12.800: 98.6922% ( 2) 00:12:23.596 12.895 - 12.990: 98.6998% ( 1) 00:12:23.596 13.369 - 13.464: 98.7074% ( 1) 00:12:23.596 13.559 - 13.653: 98.7226% ( 2) 00:12:23.596 13.653 - 13.748: 98.7302% ( 1) 00:12:23.596 13.748 - 13.843: 98.7378% ( 1) 00:12:23.596 14.033 - 14.127: 98.7454% ( 1) 00:12:23.596 14.222 - 14.317: 98.7530% ( 1) 00:12:23.596 14.507 - 14.601: 98.7606% ( 1) 00:12:23.596 14.601 - 14.696: 98.7759% ( 2) 00:12:23.596 14.696 - 14.791: 98.7835% ( 1) 00:12:23.596 14.981 - 15.076: 98.7911% ( 1) 00:12:23.596 15.360 - 15.455: 98.7987% ( 1) 00:12:23.596 16.972 - 17.067: 98.8063% ( 1) 00:12:23.596 17.161 - 17.256: 98.8139% ( 1) 00:12:23.596 17.256 - 17.351: 98.8215% ( 1) 00:12:23.596 17.351 - 17.446: 98.8443% ( 3) 00:12:23.596 17.446 - 17.541: 98.9051% ( 8) 00:12:23.596 17.541 - 17.636: 98.9507% ( 6) 00:12:23.596 17.636 - 17.730: 99.0116% ( 8) 00:12:23.596 17.730 - 17.825: 99.0344% ( 3) 00:12:23.596 17.825 - 17.920: 99.0572% ( 3) 00:12:23.596 17.920 - 18.015: 99.1104% ( 7) 00:12:23.596 18.015 - 18.110: 99.1256% ( 2) 00:12:23.596 18.110 - 18.204: 99.2092% ( 11) 00:12:23.596 18.204 - 18.299: 99.2929% ( 11) 00:12:23.596 18.299 - 18.394: 99.3689% ( 10) 00:12:23.596 18.394 - 18.489: 99.3917% ( 3) 00:12:23.596 18.489 - 18.584: 99.4678% ( 10) 00:12:23.596 18.584 - 18.679: 99.5134% ( 6) 00:12:23.596 18.679 - 18.773: 99.5818% ( 9) 00:12:23.596 18.773 - 18.868: 99.6046% ( 3) 00:12:23.596 18.868 - 18.963: 99.6350% ( 4) 00:12:23.596 18.963 - 19.058: 99.6502% ( 2) 00:12:23.596 19.058 - 19.153: 99.6731% ( 3) 00:12:23.596 19.153 - 19.247: 99.6807% ( 1) 00:12:23.596 19.247 - 19.342: 99.6883% ( 1) 00:12:23.596 19.342 - 19.437: 99.7035% ( 2) 00:12:23.596 19.437 - 19.532: 99.7187% ( 2) 00:12:23.596 20.290 - 20.385: 99.7263% ( 1) 00:12:23.596 20.480 - 20.575: 99.7339% ( 1) 00:12:23.596 20.575 - 20.670: 99.7491% ( 2) 00:12:23.596 20.670 - 20.764: 99.7567% 
( 1) 00:12:23.596 20.764 - 20.859: 99.7643% ( 1) 00:12:23.596 20.859 - 20.954: 99.7719% ( 1) 00:12:23.596 21.713 - 21.807: 99.7795% ( 1) 00:12:23.596 22.187 - 22.281: 99.7871% ( 1) 00:12:23.596 22.471 - 22.566: 99.7947% ( 1) 00:12:23.596 22.945 - 23.040: 99.8023% ( 1) 00:12:23.596 23.514 - 23.609: 99.8175% ( 2) 00:12:23.596 23.988 - 24.083: 99.8251% ( 1) 00:12:23.596 24.462 - 24.652: 99.8327% ( 1) 00:12:23.596 25.979 - 26.169: 99.8403% ( 1) 00:12:23.596 26.738 - 26.927: 99.8479% ( 1) 00:12:23.596 27.496 - 27.686: 99.8555% ( 1) 00:12:23.596 28.634 - 28.824: 99.8631% ( 1) 00:12:23.596 28.824 - 29.013: 99.8707% ( 1) 00:12:23.596 29.393 - 29.582: 99.8783% ( 1) 00:12:23.596 30.151 - 30.341: 99.8859% ( 1) 00:12:23.596 3980.705 - 4004.978: 99.9392% ( 7) 00:12:23.596 4004.978 - 4029.250: 99.9924% ( 7) 00:12:23.596 4029.250 - 4053.523: 100.0000% ( 1) 00:12:23.596 00:12:23.596 Complete histogram 00:12:23.596 ================== 00:12:23.596 Range in us Cumulative Count 00:12:23.596 2.050 - 2.062: 0.0760% ( 10) 00:12:23.596 2.062 - 2.074: 13.6481% ( 1785) 00:12:23.596 2.074 - 2.086: 38.5113% ( 3270) 00:12:23.596 2.086 - 2.098: 40.7923% ( 300) 00:12:23.596 2.098 - 2.110: 49.1864% ( 1104) 00:12:23.596 2.110 - 2.121: 54.8130% ( 740) 00:12:23.596 2.121 - 2.133: 56.5009% ( 222) 00:12:23.596 2.133 - 2.145: 65.0471% ( 1124) 00:12:23.596 2.145 - 2.157: 70.5672% ( 726) 00:12:23.596 2.157 - 2.169: 71.9054% ( 176) 00:12:23.596 2.169 - 2.181: 76.2774% ( 575) 00:12:23.596 2.181 - 2.193: 78.2847% ( 264) 00:12:23.596 2.193 - 2.204: 79.1134% ( 109) 00:12:23.596 2.204 - 2.216: 81.8431% ( 359) 00:12:23.596 2.216 - 2.228: 85.0365% ( 420) 00:12:23.596 2.228 - 2.240: 86.4659% ( 188) 00:12:23.596 2.240 - 2.252: 88.1463% ( 221) 00:12:23.596 2.252 - 2.264: 88.8382% ( 91) 00:12:23.596 2.264 - 2.276: 89.1423% ( 40) 00:12:23.596 2.276 - 2.287: 89.4693% ( 43) 00:12:23.596 2.287 - 2.299: 90.1384% ( 88) 00:12:23.596 2.299 - 2.311: 90.7391% ( 79) 00:12:23.596 2.311 - 2.323: 90.8987% ( 21) 00:12:23.596 2.323 - 2.335: 90.9443% ( 6) 00:12:23.596 2.335 - 2.347: 91.0280% ( 11) 00:12:23.596 2.347 - 2.359: 91.1116% ( 11) 00:12:23.596 2.359 - 2.370: 91.2941% ( 24) 00:12:23.596 2.370 - 2.382: 91.7199% ( 56) 00:12:23.596 2.382 - 2.394: 92.1457% ( 56) 00:12:23.596 2.394 - 2.406: 92.4194% ( 36) 00:12:23.596 2.406 - 2.418: 92.5943% ( 23) 00:12:23.596 2.418 - 2.430: 92.7844% ( 25) 00:12:23.596 2.430 - 2.441: 93.0049% ( 29) 00:12:23.596 2.441 - 2.453: 93.1341% ( 17) 00:12:23.596 2.453 - 2.465: 93.3166% ( 24) 00:12:23.596 2.465 - 2.477: 93.4915% ( 23) 00:12:23.596 2.477 - 2.489: 93.5751% ( 11) 00:12:23.596 2.489 - 2.501: 93.6968% ( 16) 00:12:23.596 2.501 - 2.513: 93.8184% ( 16) 00:12:23.596 2.513 - 2.524: 93.8717% ( 7) 00:12:23.596 2.524 - 2.536: 93.9325% ( 8) 00:12:23.596 2.536 - 2.548: 93.9705% ( 5) 00:12:23.596 2.548 - 2.560: 94.0161% ( 6) 00:12:23.596 2.560 - 2.572: 94.0769% ( 8) 00:12:23.596 2.572 - 2.584: 94.1454% ( 9) 00:12:23.596 2.584 - 2.596: 94.2062% ( 8) 00:12:23.596 2.596 - 2.607: 94.2594% ( 7) 00:12:23.596 2.607 - 2.619: 94.3811% ( 16) 00:12:23.596 2.619 - 2.631: 94.4951% ( 15) 00:12:23.596 2.631 - 2.643: 94.5864% ( 12) 00:12:23.596 2.643 - 2.655: 94.6244% ( 5) 00:12:23.596 2.655 - 2.667: 94.6624% ( 5) 00:12:23.596 2.667 - 2.679: 94.7536% ( 12) 00:12:23.596 2.679 - 2.690: 94.8145% ( 8) 00:12:23.596 2.690 - 2.702: 94.8981% ( 11) 00:12:23.596 2.702 - 2.714: 95.0274% ( 17) 00:12:23.596 2.714 - 2.726: 95.1034% ( 10) 00:12:23.596 2.726 - 2.738: 95.2175% ( 15) 00:12:23.596 2.738 - 2.750: 95.3315% ( 15) 00:12:23.596 2.750 - 2.761: 95.4532% 
( 16) 00:12:23.596 2.761 - 2.773: 95.5596% ( 14) 00:12:23.596 2.773 - 2.785: 95.6737% ( 15) 00:12:23.596 2.785 - 2.797: 95.7877% ( 15) 00:12:23.596 2.797 - 2.809: 95.9094% ( 16) 00:12:23.596 2.809 - 2.821: 95.9930% ( 11) 00:12:23.596 2.821 - 2.833: 96.0995% ( 14) 00:12:23.596 2.833 - 2.844: 96.1679% ( 9) 00:12:23.596 2.844 - 2.856: 96.2743% ( 14) 00:12:23.596 2.856 - 2.868: 96.3276% ( 7) 00:12:23.596 2.868 - 2.880: 96.3732% ( 6) 00:12:23.596 2.880 - 2.892: 96.4492% ( 10) 00:12:23.596 2.892 - 2.904: 96.5176% ( 9) 00:12:23.596 2.904 - 2.916: 96.5785% ( 8) 00:12:23.596 2.916 - 2.927: 96.6241% ( 6) 00:12:23.596 2.927 - 2.939: 96.6925% ( 9) 00:12:23.596 2.939 - 2.951: 96.7838% ( 12) 00:12:23.596 2.951 - 2.963: 96.8674% ( 11) 00:12:23.596 2.963 - 2.975: 96.9510% ( 11) 00:12:23.597 2.975 - 2.987: 96.9967% ( 6) 00:12:23.597 2.987 - 2.999: 97.0423% ( 6) 00:12:23.597 2.999 - 3.010: 97.1183% ( 10) 00:12:23.597 3.010 - 3.022: 97.1411% ( 3) 00:12:23.597 3.022 - 3.034: 97.1867% ( 6) 00:12:23.597 3.034 - 3.058: 97.2856% ( 13) 00:12:23.597 3.058 - 3.081: 97.3464% ( 8) 00:12:23.597 3.081 - 3.105: 97.3920% ( 6) 00:12:23.597 3.105 - 3.129: 97.4985% ( 14) 00:12:23.597 3.129 - 3.153: 97.5517% ( 7) 00:12:23.597 3.153 - 3.176: 97.6277% ( 10) 00:12:23.597 3.176 - 3.200: 97.6886% ( 8) 00:12:23.597 3.200 - 3.224: 97.7342% ( 6) 00:12:23.597 3.224 - 3.247: 97.7874% ( 7) 00:12:23.597 3.247 - 3.271: 97.8254% ( 5) 00:12:23.597 3.271 - 3.295: 97.8558% ( 4) 00:12:23.597 3.295 - 3.319: 97.9015% ( 6) 00:12:23.597 3.319 - 3.342: 97.9395% ( 5) 00:12:23.597 3.342 - 3.366: 97.9547% ( 2) 00:12:23.597 3.366 - 3.390: 97.9775% ( 3) 00:12:23.597 3.390 - 3.413: 98.0231% ( 6) 00:12:23.597 3.413 - 3.437: 98.0611% ( 5) 00:12:23.597 3.437 - 3.461: 98.0763% ( 2) 00:12:23.597 3.461 - 3.484: 98.1220% ( 6) 00:12:23.597 3.484 - 3.508: 98.1524% ( 4) 00:12:23.597 3.508 - 3.532: 98.1828% ( 4) 00:12:23.597 3.532 - 3.556: 98.2056% ( 3) 00:12:23.597 3.556 - 3.579: 98.2436% ( 5) 00:12:23.597 3.579 - 3.603: 98.2816% ( 5) 00:12:23.597 3.603 - 3.627: 98.3120% ( 4) 00:12:23.597 3.627 - 3.650: 98.3349% ( 3) 00:12:23.597 3.650 - 3.674: 98.3729% ( 5) 00:12:23.597 3.674 - 3.698: 98.4109% ( 5) 00:12:23.597 3.698 - 3.721: 98.4489% ( 5) 00:12:23.597 3.721 - 3.745: 98.4717% ( 3) 00:12:23.597 3.769 - 3.793: 98.5097% ( 5) 00:12:23.597 3.816 - 3.840: 98.5173% ( 1) 00:12:23.597 3.840 - 3.864: 98.5325% ( 2) 00:12:23.597 3.864 - 3.887: 98.5401% ( 1) 00:12:23.597 3.887 - 3.911: 98.5554% ( 2) 00:12:23.597 3.911 - 3.935: 98.5630% ( 1) 00:12:23.597 3.935 - 3.959: 98.5782% ( 2) 00:12:23.597 3.959 - 3.982: 98.5934% ( 2) 00:12:23.597 3.982 - 4.006: 98.6010% ( 1) 00:12:23.597 4.030 - 4.053: 98.6314% ( 4) 00:12:23.597 4.101 - 4.124: 98.6390% ( 1) 00:12:23.597 4.124 - 4.148: 98.6542% ( 2) 00:12:23.597 4.172 - 4.196: 98.6694% ( 2) 00:12:23.597 4.219 - 4.243: 98.6770% ( 1) 00:12:23.597 4.243 - 4.267: 98.6922% ( 2) 00:12:23.597 4.267 - 4.290: 98.6998% ( 1) 00:12:23.597 4.290 - 4.314: 98.7074% ( 1) 00:12:23.597 4.314 - 4.338: 98.7150% ( 1) 00:12:23.597 4.338 - 4.361: 98.7226% ( 1) 00:12:23.597 4.409 - 4.433: 98.7302% ( 1) 00:12:23.597 4.433 - 4.456: 98.7378% ( 1) 00:12:23.597 4.456 - 4.480: 98.7454% ( 1) 00:12:23.597 4.551 - 4.575: 98.7530% ( 1) 00:12:23.597 4.622 - 4.646: 98.7682% ( 2) 00:12:23.597 4.646 - 4.670: 98.7835% ( 2) 00:12:23.597 4.836 - 4.859: 98.7911% ( 1) 00:12:23.597 5.286 - 5.310: 98.7987% ( 1) 00:12:23.597 5.713 - 5.736: 98.8063% ( 1) 00:12:23.597 6.163 - 6.210: 98.8139% ( 1) 00:12:23.597 6.495 - 6.542: 98.8215% ( 1) 00:12:23.597 6.542 - 6.590: 98.8291% ( 1) 
00:12:23.597 6.637 - 6.684: 98.8367% ( 1) 00:12:23.597 6.874 - 6.921: 98.8443% ( 1) 00:12:23.597 6.921 - 6.969: 98.8519% ( 1) 00:12:23.597 7.111 - 7.159: 98.8671% ( 2) 00:12:23.597 7.538 - 7.585: 98.8747% ( 1) 00:12:23.597 7.775 - 7.822: 98.8823% ( 1) 00:12:23.597 7.870 - 7.917: 98.8899% ( 1) 00:12:23.597 8.059 - 8.107: 98.8975% ( 1) 00:12:23.597 9.150 - 9.197: 98.9051% ( 1) 00:12:23.597 9.339 - 9.387: 98.9127% ( 1) 00:12:23.597 10.193 - 10.240: 98.9203% ( 1) 00:12:23.597 10.761 - 10.809: 98.9279% ( 1) 00:12:23.597 11.236 - 11.283: 98.9355% ( 1) 00:12:23.597 12.089 - 12.136: 98.9431% ( 1) 00:12:23.597 13.938 - 14.033: 98.9507% ( 1) 00:12:23.597 15.360 - 15.455: 98.9583% ( 1) 00:12:23.597 15.644 - 15.739: 98.9811% ( 3) 00:12:23.597 15.739 - 15.834: 99.0040% ( 3) 00:12:23.597 15.834 - 15.929: 99.0116% ( 1) 00:12:23.597 15.929 - 16.024: 99.0344% ( 3) 00:12:23.597 16.024 - 16.119: 99.0572% ( 3) 00:12:23.597 16.119 - 16.213: 99.0800% ( 3) 00:12:23.597 16.213 - 16.308: 99.1104% ( 4) 00:12:23.597 16.403 - 16.498: 99.1332% ( 3) 00:12:23.597 16.498 - 16.593: 99.1712% ( 5) 00:12:23.597 16.593 - 16.687: 99.2016% ( 4) 00:12:23.597 16.687 - 16.782: 99.2245% ( 3) 00:12:23.597 16.782 - 16.877: 99.2549% ( 4) 00:12:23.597 16.877 - 16.972: 99.2777% ( 3) 00:12:23.597 16.972 - 17.067: 99.2929% ( 2) 00:12:23.597 17.067 - 17.161: 99.3081% ( 2) 00:12:23.597 17.256 - 17.351: 99.3233% ( 2) 00:12:23.597 17.351 - 17.446: 99.3385% ( 2) 00:12:23.597 17.446 - 17.541: 99.3461% ( 1) 00:12:23.597 17.541 - 17.636: 99.3537% ( 1) 00:12:23.597 17.636 - 17.730: 99.3765% ( 3) 00:12:23.597 17.825 - 17.920: 99.3841% ( 1) 00:12:23.597 17.920 - 18.015: 99.3917% ( 1) 00:12:23.597 18.015 - 18.110: 99.3993% ( 1) 00:12:23.597 18.204 - 18.299: 99.4069% ( 1) 00:12:23.597 18.489 - 18.584: 99.4145% ( 1) 00:12:23.597 22.756 - 22.850: 99.4221% ( 1) 00:12:23.597
[2024-07-25 13:41:20.265987] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
197.215 - 198.732: 99.4297% ( 1) 00:12:23.597 3980.705 - 4004.978: 99.8479% ( 55) 00:12:23.597 4004.978 - 4029.250: 100.0000% ( 20) 00:12:23.597 00:12:23.597 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:23.597 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:23.597 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:23.597 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:23.597 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:23.597 [ 00:12:23.597 { 00:12:23.597 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:23.597 "subtype": "Discovery", 00:12:23.597 "listen_addresses": [], 00:12:23.597 "allow_any_host": true, 00:12:23.597 "hosts": [] 00:12:23.597 }, 00:12:23.597 { 00:12:23.597 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:23.597 "subtype": "NVMe", 00:12:23.597 "listen_addresses": [ 00:12:23.597 { 00:12:23.597 "trtype": "VFIOUSER", 00:12:23.597 "adrfam": "IPv4", 00:12:23.597 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:23.597 "trsvcid": "0" 00:12:23.597 } 00:12:23.597 ], 00:12:23.597 "allow_any_host": true, 00:12:23.597 "hosts":
[], 00:12:23.597 "serial_number": "SPDK1", 00:12:23.597 "model_number": "SPDK bdev Controller", 00:12:23.597 "max_namespaces": 32, 00:12:23.597 "min_cntlid": 1, 00:12:23.597 "max_cntlid": 65519, 00:12:23.597 "namespaces": [ 00:12:23.597 { 00:12:23.597 "nsid": 1, 00:12:23.597 "bdev_name": "Malloc1", 00:12:23.597 "name": "Malloc1", 00:12:23.597 "nguid": "2D1F10C6A7684D3EA661A3B2D98E2921", 00:12:23.597 "uuid": "2d1f10c6-a768-4d3e-a661-a3b2d98e2921" 00:12:23.597 }, 00:12:23.597 { 00:12:23.597 "nsid": 2, 00:12:23.597 "bdev_name": "Malloc3", 00:12:23.597 "name": "Malloc3", 00:12:23.597 "nguid": "710F643980F444D0BDB06F96AB65DFAA", 00:12:23.597 "uuid": "710f6439-80f4-44d0-bdb0-6f96ab65dfaa" 00:12:23.597 } 00:12:23.597 ] 00:12:23.597 }, 00:12:23.597 { 00:12:23.597 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:23.597 "subtype": "NVMe", 00:12:23.597 "listen_addresses": [ 00:12:23.597 { 00:12:23.597 "trtype": "VFIOUSER", 00:12:23.597 "adrfam": "IPv4", 00:12:23.597 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:23.597 "trsvcid": "0" 00:12:23.597 } 00:12:23.597 ], 00:12:23.597 "allow_any_host": true, 00:12:23.597 "hosts": [], 00:12:23.597 "serial_number": "SPDK2", 00:12:23.597 "model_number": "SPDK bdev Controller", 00:12:23.597 "max_namespaces": 32, 00:12:23.597 "min_cntlid": 1, 00:12:23.597 "max_cntlid": 65519, 00:12:23.597 "namespaces": [ 00:12:23.598 { 00:12:23.598 "nsid": 1, 00:12:23.598 "bdev_name": "Malloc2", 00:12:23.598 "name": "Malloc2", 00:12:23.598 "nguid": "EAF854D966BE40D399F792CF032A686F", 00:12:23.598 "uuid": "eaf854d9-66be-40d3-99f7-92cf032a686f" 00:12:23.598 } 00:12:23.598 ] 00:12:23.598 } 00:12:23.598 ] 00:12:23.598 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:23.598 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=544520 00:12:23.598 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:23.598 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:23.598 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:23.598 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:23.598 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:23.598 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:23.598 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:23.598 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:23.598 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.856 [2024-07-25 13:41:20.719558] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:23.856 Malloc4 00:12:23.856 13:41:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:24.113 [2024-07-25 13:41:21.081289] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:24.113 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:24.113 Asynchronous Event Request test 00:12:24.113 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:24.113 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:24.113 Registering asynchronous event callbacks... 00:12:24.113 Starting namespace attribute notice tests for all controllers... 00:12:24.113 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:24.113 aer_cb - Changed Namespace 00:12:24.113 Cleaning up... 00:12:24.370 [ 00:12:24.370 { 00:12:24.370 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:24.370 "subtype": "Discovery", 00:12:24.370 "listen_addresses": [], 00:12:24.370 "allow_any_host": true, 00:12:24.370 "hosts": [] 00:12:24.370 }, 00:12:24.370 { 00:12:24.370 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:24.370 "subtype": "NVMe", 00:12:24.370 "listen_addresses": [ 00:12:24.370 { 00:12:24.370 "trtype": "VFIOUSER", 00:12:24.370 "adrfam": "IPv4", 00:12:24.370 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:24.370 "trsvcid": "0" 00:12:24.370 } 00:12:24.370 ], 00:12:24.370 "allow_any_host": true, 00:12:24.370 "hosts": [], 00:12:24.370 "serial_number": "SPDK1", 00:12:24.370 "model_number": "SPDK bdev Controller", 00:12:24.370 "max_namespaces": 32, 00:12:24.370 "min_cntlid": 1, 00:12:24.370 "max_cntlid": 65519, 00:12:24.370 "namespaces": [ 00:12:24.370 { 00:12:24.370 "nsid": 1, 00:12:24.370 "bdev_name": "Malloc1", 00:12:24.370 "name": "Malloc1", 00:12:24.370 "nguid": "2D1F10C6A7684D3EA661A3B2D98E2921", 00:12:24.370 "uuid": "2d1f10c6-a768-4d3e-a661-a3b2d98e2921" 00:12:24.370 }, 00:12:24.370 { 00:12:24.370 "nsid": 2, 00:12:24.370 "bdev_name": "Malloc3", 00:12:24.370 "name": "Malloc3", 00:12:24.370 "nguid": "710F643980F444D0BDB06F96AB65DFAA", 00:12:24.370 "uuid": "710f6439-80f4-44d0-bdb0-6f96ab65dfaa" 00:12:24.370 } 00:12:24.370 ] 00:12:24.370 }, 00:12:24.370 { 00:12:24.370 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:24.370 "subtype": "NVMe", 00:12:24.370 "listen_addresses": [ 00:12:24.370 { 00:12:24.370 "trtype": "VFIOUSER", 00:12:24.370 "adrfam": "IPv4", 00:12:24.370 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:24.370 "trsvcid": "0" 00:12:24.370 } 00:12:24.370 ], 00:12:24.370 "allow_any_host": true, 00:12:24.370 "hosts": [], 00:12:24.370 
"serial_number": "SPDK2", 00:12:24.370 "model_number": "SPDK bdev Controller", 00:12:24.371 "max_namespaces": 32, 00:12:24.371 "min_cntlid": 1, 00:12:24.371 "max_cntlid": 65519, 00:12:24.371 "namespaces": [ 00:12:24.371 { 00:12:24.371 "nsid": 1, 00:12:24.371 "bdev_name": "Malloc2", 00:12:24.371 "name": "Malloc2", 00:12:24.371 "nguid": "EAF854D966BE40D399F792CF032A686F", 00:12:24.371 "uuid": "eaf854d9-66be-40d3-99f7-92cf032a686f" 00:12:24.371 }, 00:12:24.371 { 00:12:24.371 "nsid": 2, 00:12:24.371 "bdev_name": "Malloc4", 00:12:24.371 "name": "Malloc4", 00:12:24.371 "nguid": "31D761F9BF3F433AA2FECDE4A53D0F72", 00:12:24.371 "uuid": "31d761f9-bf3f-433a-a2fe-cde4a53d0f72" 00:12:24.371 } 00:12:24.371 ] 00:12:24.371 } 00:12:24.371 ] 00:12:24.371 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 544520 00:12:24.371 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:24.371 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 538986 00:12:24.371 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 538986 ']' 00:12:24.371 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 538986 00:12:24.371 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:12:24.371 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:24.371 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 538986 00:12:24.630 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:24.630 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:24.630 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 538986' 00:12:24.630 killing process with pid 538986 00:12:24.630 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 538986 00:12:24.630 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 538986 00:12:24.888 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:24.888 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:24.888 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:24.888 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:24.888 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:24.888 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=544662 00:12:24.888 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:24.888 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 544662' 00:12:24.888 Process pid: 544662 00:12:24.888 13:41:21 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:24.888 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 544662 00:12:24.888 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 544662 ']' 00:12:24.888 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.888 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:24.888 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.888 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:24.888 13:41:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:24.888 [2024-07-25 13:41:21.844752] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:24.888 [2024-07-25 13:41:21.845746] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:24.888 [2024-07-25 13:41:21.845803] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.888 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.888 [2024-07-25 13:41:21.904949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.146 [2024-07-25 13:41:22.016716] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.146 [2024-07-25 13:41:22.016772] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.146 [2024-07-25 13:41:22.016800] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.146 [2024-07-25 13:41:22.016811] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.146 [2024-07-25 13:41:22.016820] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.146 [2024-07-25 13:41:22.016915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.147 [2024-07-25 13:41:22.016997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.147 [2024-07-25 13:41:22.017126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.147 [2024-07-25 13:41:22.017131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.147 [2024-07-25 13:41:22.121538] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:25.147 [2024-07-25 13:41:22.121766] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:25.147 [2024-07-25 13:41:22.122094] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:12:25.147 [2024-07-25 13:41:22.122653] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:25.147 [2024-07-25 13:41:22.122898] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:12:25.147 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:25.147 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:12:25.147 13:41:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:26.165 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:26.429 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:26.429 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:26.429 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:26.429 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:26.429 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:26.993 Malloc1 00:12:26.993 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:26.993 13:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:27.251 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:27.508 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:27.508 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:27.508 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:27.765 Malloc2 00:12:27.765 13:41:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:28.022 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:28.279 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 
-s 0 00:12:28.539 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:28.539 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 544662 00:12:28.539 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 544662 ']' 00:12:28.539 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 544662 00:12:28.539 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:12:28.539 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:28.539 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 544662 00:12:28.539 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:28.539 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:28.539 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 544662' 00:12:28.539 killing process with pid 544662 00:12:28.539 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 544662 00:12:28.539 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 544662 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:29.107 00:12:29.107 real 0m52.800s 00:12:29.107 user 3m28.300s 00:12:29.107 sys 0m4.290s 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:29.107 ************************************ 00:12:29.107 END TEST nvmf_vfio_user 00:12:29.107 ************************************ 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:29.107 ************************************ 00:12:29.107 START TEST nvmf_vfio_user_nvme_compliance 00:12:29.107 ************************************ 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:29.107 * Looking for test storage... 
00:12:29.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=545257 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 545257' 00:12:29.107 Process pid: 545257 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 545257 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 545257 ']' 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:29.107 13:41:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:29.107 [2024-07-25 13:41:26.032243] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:29.107 [2024-07-25 13:41:26.032320] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.108 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.108 [2024-07-25 13:41:26.091540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:29.365 [2024-07-25 13:41:26.203453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.365 [2024-07-25 13:41:26.203502] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.365 [2024-07-25 13:41:26.203515] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.365 [2024-07-25 13:41:26.203526] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.365 [2024-07-25 13:41:26.203535] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:29.365 [2024-07-25 13:41:26.203608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.365 [2024-07-25 13:41:26.203635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.365 [2024-07-25 13:41:26.203638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.365 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:29.365 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:12:29.365 13:41:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:30.301 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:30.301 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:30.301 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:30.301 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.301 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:30.301 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.301 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:30.560 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:30.560 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.560 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:30.560 malloc0 00:12:30.560 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.560 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:12:30.560 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.560 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:30.560 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.560 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:30.560 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.560 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:30.560 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.560 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:30.560 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.560 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:30.560 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.560 13:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:30.560 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.560 00:12:30.560 00:12:30.560 CUnit - A unit testing framework for C - Version 2.1-3 00:12:30.560 http://cunit.sourceforge.net/ 00:12:30.560 00:12:30.560 00:12:30.560 Suite: nvme_compliance 00:12:30.560 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 13:41:27.553766] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:30.560 [2024-07-25 13:41:27.555284] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:30.560 [2024-07-25 13:41:27.555311] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:30.560 [2024-07-25 13:41:27.555332] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:30.560 [2024-07-25 13:41:27.559797] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:30.560 passed 00:12:30.818 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 13:41:27.643419] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:30.818 [2024-07-25 13:41:27.646436] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:30.818 passed 00:12:30.818 Test: admin_identify_ns ...[2024-07-25 13:41:27.733541] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:30.818 [2024-07-25 13:41:27.794120] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:30.818 [2024-07-25 13:41:27.802077] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:30.818 [2024-07-25 
13:41:27.823198] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.076 passed 00:12:31.076 Test: admin_get_features_mandatory_features ...[2024-07-25 13:41:27.906964] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:31.076 [2024-07-25 13:41:27.909982] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.076 passed 00:12:31.076 Test: admin_get_features_optional_features ...[2024-07-25 13:41:27.994543] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:31.076 [2024-07-25 13:41:27.997562] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.076 passed 00:12:31.076 Test: admin_set_features_number_of_queues ...[2024-07-25 13:41:28.080565] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:31.333 [2024-07-25 13:41:28.186176] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.333 passed 00:12:31.333 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 13:41:28.265656] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:31.333 [2024-07-25 13:41:28.268682] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.333 passed 00:12:31.333 Test: admin_get_log_page_with_lpo ...[2024-07-25 13:41:28.351804] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:31.591 [2024-07-25 13:41:28.418074] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:31.591 [2024-07-25 13:41:28.431153] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.591 passed 00:12:31.591 Test: fabric_property_get ...[2024-07-25 13:41:28.514835] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:31.591 [2024-07-25 13:41:28.516147] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:31.591 [2024-07-25 13:41:28.517857] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.591 passed 00:12:31.591 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 13:41:28.598382] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:31.591 [2024-07-25 13:41:28.599668] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:31.591 [2024-07-25 13:41:28.601415] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.850 passed 00:12:31.850 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 13:41:28.687556] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:31.850 [2024-07-25 13:41:28.772074] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:31.850 [2024-07-25 13:41:28.788071] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:31.850 [2024-07-25 13:41:28.793176] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:31.850 passed 00:12:31.850 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 13:41:28.875694] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:31.850 [2024-07-25 13:41:28.876967] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:12:31.850 [2024-07-25 13:41:28.878719] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:32.110 passed 00:12:32.110 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 13:41:28.958521] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:32.110 [2024-07-25 13:41:29.034073] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:32.110 [2024-07-25 13:41:29.058073] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:32.110 [2024-07-25 13:41:29.063181] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:32.110 passed 00:12:32.369 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 13:41:29.149301] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:32.369 [2024-07-25 13:41:29.150588] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:32.369 [2024-07-25 13:41:29.150624] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:32.369 [2024-07-25 13:41:29.152324] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:32.369 passed 00:12:32.369 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 13:41:29.234596] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:32.369 [2024-07-25 13:41:29.327072] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:12:32.369 [2024-07-25 13:41:29.335073] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:32.369 [2024-07-25 13:41:29.343073] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:32.369 [2024-07-25 13:41:29.351069] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:32.369 [2024-07-25 13:41:29.380167] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:32.627 passed 00:12:32.627 Test: admin_create_io_sq_verify_pc ...[2024-07-25 13:41:29.462054] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:32.627 [2024-07-25 13:41:29.482095] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:32.627 [2024-07-25 13:41:29.499369] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:32.627 passed 00:12:32.627 Test: admin_create_io_qp_max_qps ...[2024-07-25 13:41:29.581942] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:34.058 [2024-07-25 13:41:30.694080] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:34.058 [2024-07-25 13:41:31.078187] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:34.317 passed 00:12:34.317 Test: admin_create_io_sq_shared_cq ...[2024-07-25 13:41:31.163469] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:34.317 [2024-07-25 13:41:31.295085] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:34.317 [2024-07-25 13:41:31.332158] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:34.577 passed 00:12:34.577 00:12:34.577 Run Summary: Type Total Ran Passed Failed Inactive 00:12:34.577 
suites 1 1 n/a 0 0 00:12:34.577 tests 18 18 18 0 0 00:12:34.577 asserts 360 360 360 0 n/a 00:12:34.577 00:12:34.577 Elapsed time = 1.566 seconds 00:12:34.577 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 545257 00:12:34.577 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 545257 ']' 00:12:34.577 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 545257 00:12:34.577 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:12:34.577 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:34.577 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 545257 00:12:34.577 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:34.577 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:34.577 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 545257' 00:12:34.577 killing process with pid 545257 00:12:34.577 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 545257 00:12:34.577 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 545257 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:34.836 00:12:34.836 real 0m5.796s 00:12:34.836 user 0m16.232s 00:12:34.836 sys 0m0.555s 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:34.836 ************************************ 00:12:34.836 END TEST nvmf_vfio_user_nvme_compliance 00:12:34.836 ************************************ 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:34.836 ************************************ 00:12:34.836 START TEST nvmf_vfio_user_fuzz 00:12:34.836 ************************************ 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:34.836 * Looking for test storage... 
00:12:34.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.836 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=545982 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 545982' 00:12:34.837 Process pid: 545982 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 545982 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 545982 ']' 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
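For orientation, the target bring-up that the trace below replays condenses to the following RPC sequence. This is a sketch, not part of the captured output; it assumes SPDK's rpc.py talking to the default /var/tmp/spdk.sock and reuses the sizes, NQN, and paths that vfio_user_fuzz.sh uses in this log:

    # Register the vfio-user transport with the running nvmf_tgt.
    rpc.py nvmf_create_transport -t VFIOUSER
    # Directory that will hold the vfio-user socket and BAR files.
    mkdir -p /var/run/vfio-user
    # 64 MB RAM-backed bdev with a 512-byte block size.
    rpc.py bdev_malloc_create 64 512 -b malloc0
    # Subsystem that allows any host (-a), serial number "spdk".
    rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    # Expose malloc0 as the subsystem's first namespace.
    rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    # Listen on the vfio-user directory created above.
    rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0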
00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:34.837 13:41:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:35.404 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:35.404 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:12:35.404 13:41:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:36.343 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:36.343 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.343 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:36.343 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.343 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:36.343 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:36.343 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.343 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:36.343 malloc0 00:12:36.344 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.344 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:36.344 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.344 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:36.344 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.344 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:36.344 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.344 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:36.344 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.344 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:36.344 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.344 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:36.344 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.344 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
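The fuzz run itself is then a single nvme_fuzz invocation against that transport ID. A standalone sketch under the same workspace layout (-m 0x2 pins the fuzzer to core 1, -t 30 bounds the run to 30 seconds, -S 123456 fixes the random seed so a failing run can be replayed, -F selects the target; -N and -a are kept verbatim from the harness):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz \
        -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
        -N -a

On completion it dumps, as in the trace that follows, the admin and I/O opcodes that completed successfully together with per-queue command totals for the seeded run.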
00:12:36.344 13:41:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:08.403 Fuzzing completed. Shutting down the fuzz application 00:13:08.403 00:13:08.403 Dumping successful admin opcodes: 00:13:08.403 8, 9, 10, 24, 00:13:08.403 Dumping successful io opcodes: 00:13:08.403 0, 00:13:08.403 NS: 0x200003a1ef00 I/O qp, Total commands completed: 614832, total successful commands: 2374, random_seed: 167931456 00:13:08.403 NS: 0x200003a1ef00 admin qp, Total commands completed: 138417, total successful commands: 1120, random_seed: 1901701184 00:13:08.403 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:08.403 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.403 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:08.403 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.403 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 545982 00:13:08.403 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 545982 ']' 00:13:08.403 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 545982 00:13:08.403 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:13:08.403 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:08.403 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 545982 00:13:08.403 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:08.403 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:08.403 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 545982' 00:13:08.403 killing process with pid 545982 00:13:08.403 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 545982 00:13:08.403 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 545982 00:13:08.403 13:42:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:08.403 00:13:08.403 real 0m32.290s 00:13:08.403 user 0m29.691s 00:13:08.403 sys 0m29.239s 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:08.403 ************************************ 
00:13:08.403 END TEST nvmf_vfio_user_fuzz 00:13:08.403 ************************************ 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:08.403 ************************************ 00:13:08.403 START TEST nvmf_auth_target 00:13:08.403 ************************************ 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:08.403 * Looking for test storage... 00:13:08.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.403 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:08.404 13:42:04 
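
The identity used throughout this test is fixed in the entries above: common.sh runs nvme gen-hostnqn once, records the UUID-based NQN as NVME_HOSTNQN, keeps the bare UUID as NVME_HOSTID, and packs both into the NVME_HOST argument array. A minimal sketch of that pattern (the parameter expansion is an assumption; the log only shows the resulting values):

NVME_HOSTNQN=$(nvme gen-hostnqn)         # nqn.2014-08.org.nvmexpress:uuid:<host uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}          # assumed extraction: keep only the UUID after the last ':'
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# later consumers just splice the array in, e.g.:
nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn
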
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:08.404 13:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:09.340 13:42:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:09.340 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:09.340 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:09.341 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:09.341 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:09.341 13:42:06 
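
Stripped of its xtrace noise, the device discovery above does two things: match PCI functions against a table of known fabrics-capable NICs (here both ports of an Intel E810, 8086:159b, bound to the ice driver), then map each matching function to its kernel netdev through sysfs. A standalone sketch of the same idea (lspci is an assumed stand-in for the script's prebuilt pci_bus_cache table):

intel=8086
e810_ids=(1592 159b)                      # E810 variants, as matched in the log
net_devs=()
for id in "${e810_ids[@]}"; do
    # lspci -D prints the full domain:bus:dev.fn address, -d filters by vendor:device
    while read -r pci _; do
        # every netdev registered for a PCI function lives under its sysfs node
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $dev ]] && net_devs+=("${dev##*/}")   # basename = interface name
        done
    done < <(lspci -D -d "$intel:$id" | awk '{print $1}')
done
printf 'Found net device: %s\n' "${net_devs[@]}"        # cvl_0_0 and cvl_0_1 on this rig
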
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:09.341 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:09.341 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.600 13:42:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:09.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:13:09.600 00:13:09.600 --- 10.0.0.2 ping statistics --- 00:13:09.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.600 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:09.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:09.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:13:09.600 00:13:09.600 --- 10.0.0.1 ping statistics --- 00:13:09.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.600 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=551546 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 551546 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 551546 ']' 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:09.600 13:42:06 
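
The namespace bring-up buried in the entries above is the core of the physical-NIC ("phy") topology: one port of the NIC moves into a private namespace and becomes the target side (cvl_0_0, 10.0.0.2), the other stays in the root namespace as the initiator (cvl_0_1, 10.0.0.1), and a single ping in each direction proves the path end to end before any NVMe/TCP traffic flows. The same commands consolidated from the log, with the log's names and addresses (root required):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port in the root-ns firewall
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back
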
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:09.600 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=551681 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fb9ac0e7ba522d08b82a8a6184faacd375b6a285929228a4 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.OmS 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fb9ac0e7ba522d08b82a8a6184faacd375b6a285929228a4 0 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fb9ac0e7ba522d08b82a8a6184faacd375b6a285929228a4 0 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fb9ac0e7ba522d08b82a8a6184faacd375b6a285929228a4 00:13:09.859 13:42:06 
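
From here on, two SPDK processes are running and every RPC in the log is addressed to one of them: nvmf_tgt (the target, started inside the namespace with DH-CHAP tracing, answering on the default /var/tmp/spdk.sock) and spdk_tgt (the host/initiator side, answering on /var/tmp/host.sock). The rpc_cmd and hostrpc wrappers seen throughout differ only in which socket they hit. A sketch of the arrangement (backgrounding with & and $! is an assumption; the log's nvmfappstart and waitforlisten helpers handle this internally):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!      # target-side RPCs: $SPDK/scripts/rpc.py <method>
"$SPDK/build/bin/spdk_tgt" -m 2 -r /var/tmp/host.sock -L nvme_auth &
hostpid=$!      # host-side RPCs:   $SPDK/scripts/rpc.py -s /var/tmp/host.sock <method>
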
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.OmS 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.OmS 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.OmS 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f4e8476407fb3549376c6bf0b3b0ac2f25b5181bccfcdd2b45a3c6c52b4d3f1b 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.7F8 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f4e8476407fb3549376c6bf0b3b0ac2f25b5181bccfcdd2b45a3c6c52b4d3f1b 3 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f4e8476407fb3549376c6bf0b3b0ac2f25b5181bccfcdd2b45a3c6c52b4d3f1b 3 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f4e8476407fb3549376c6bf0b3b0ac2f25b5181bccfcdd2b45a3c6c52b4d3f1b 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.7F8 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.7F8 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.7F8 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:09.859 13:42:06 
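
Each gen_dhchap_key call above follows one recipe: draw len/2 random bytes, hex-encode them with xxd, wrap the hex string in the DHHC-1 secret format, and store it in a mode-0600 temp file whose path lands in keys[] or ckeys[]. The body of the python step is not shown in the log; the sketch below assumes the standard NVMe DH-HMAC-CHAP secret encoding (base64 over the key bytes plus a little-endian CRC32 trailer), which is consistent with the DHHC-1:00:... strings printed further down:

format_key() {   # format_key DHHC-1 <hex-string> <digest-id>; digest-id: 0=null 1=sha256 2=sha384 3=sha512
  local prefix=$1 key=$2 digest=$3
  python3 - "$prefix" "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
# assumed encoding: ASCII key bytes followed by a 4-byte little-endian CRC32 trailer
crc = zlib.crc32(key.encode()).to_bytes(4, "little")
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key.encode() + crc).decode()))
PYEOF
}

gen_dhchap_key() {   # gen_dhchap_key <digest> <len>, e.g. "null 48" or "sha512 64"
  local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
  local key file
  key=$(xxd -p -c0 -l $(($2 / 2)) /dev/urandom)     # $2 hex characters of randomness
  file=$(mktemp -t "spdk.key-$1.XXX")
  format_key DHHC-1 "$key" "${digests[$1]}" > "$file"
  chmod 0600 "$file"                                # secrets must not be world-readable
  echo "$file"
}
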
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=22dd612d604b68962e2b8a688f00675c 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.x7P 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 22dd612d604b68962e2b8a688f00675c 1 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 22dd612d604b68962e2b8a688f00675c 1 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=22dd612d604b68962e2b8a688f00675c 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:09.859 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.x7P 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.x7P 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.x7P 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c62fa04cec607c8143ebd64042f2450784cd7855e2d5d643 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Jzt 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c62fa04cec607c8143ebd64042f2450784cd7855e2d5d643 2 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
c62fa04cec607c8143ebd64042f2450784cd7855e2d5d643 2 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c62fa04cec607c8143ebd64042f2450784cd7855e2d5d643 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Jzt 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Jzt 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Jzt 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f915b1bfed80adaf919e12f13798879c216d674fb73f1821 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.odU 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f915b1bfed80adaf919e12f13798879c216d674fb73f1821 2 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f915b1bfed80adaf919e12f13798879c216d674fb73f1821 2 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f915b1bfed80adaf919e12f13798879c216d674fb73f1821 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:10.118 13:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.odU 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.odU 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.odU 00:13:10.118 13:42:07 
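
Going the other way is useful when a DHHC-1 string from a log has to be matched back to its key file. A small helper (hypothetical, not part of auth.sh) that unpacks a secret and checks the CRC trailer under the same encoding assumption as above:

check_dhchap_secret() {   # check_dhchap_secret 'DHHC-1:02:<base64>:'  -- hypothetical helper
  python3 - "$1" <<'PYEOF'
import base64, sys, zlib
_, digest, b64, _ = sys.argv[1].split(":")        # DHHC-1 : <digest id> : <base64> :
blob = base64.b64decode(b64)
key, crc = blob[:-4], blob[-4:]                   # payload plus 4-byte trailer
assert zlib.crc32(key).to_bytes(4, "little") == crc, "CRC trailer mismatch"
print("digest id", digest, "->", key.decode())    # prints the original hex key string
PYEOF
}
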
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b18a40fa64c0ac05cdd3a684b8da11e1 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.wNE 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b18a40fa64c0ac05cdd3a684b8da11e1 1 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b18a40fa64c0ac05cdd3a684b8da11e1 1 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b18a40fa64c0ac05cdd3a684b8da11e1 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.wNE 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.wNE 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.wNE 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a8da00afcc700f034152d86e614bcb5e3afdaeefd43b23f5e602d4c586a17fd2 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:10.118 
13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Ch0 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a8da00afcc700f034152d86e614bcb5e3afdaeefd43b23f5e602d4c586a17fd2 3 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a8da00afcc700f034152d86e614bcb5e3afdaeefd43b23f5e602d4c586a17fd2 3 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a8da00afcc700f034152d86e614bcb5e3afdaeefd43b23f5e602d4c586a17fd2 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Ch0 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Ch0 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Ch0 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 551546 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 551546 ']' 00:13:10.118 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.119 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:10.119 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.119 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:10.119 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.377 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:10.377 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:10.377 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 551681 /var/tmp/host.sock 00:13:10.377 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 551681 ']' 00:13:10.377 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:13:10.377 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:10.377 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:13:10.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:10.377 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:10.377 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.635 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:10.635 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:10.635 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:13:10.635 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.635 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:10.893 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.OmS 00:13:10.893 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.893 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.893 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.893 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.OmS 00:13:10.893 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.OmS 00:13:11.151 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.7F8 ]] 00:13:11.151 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7F8 00:13:11.151 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.151 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.151 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.151 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7F8 00:13:11.151 13:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7F8 00:13:11.409 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:11.409 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.x7P 00:13:11.409 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.409 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.409 13:42:08 
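
The rest of the test is one template replayed for every digest/dhgroup/key combination; the entries above and below show the first cells (sha256 with the null dhgroup, for key0, key1, key2, ...). Each pass loads the key files into both keyrings, authorizes the host on the subsystem, attaches a controller from the SPDK host side, checks the qpair's auth state on the target, then repeats the handshake through the kernel initiator with the literal DHHC-1 secrets. Condensed from the logged commands (hostnqn and hostid stand for the uuid-based values above; reading secrets back with cat is an assumption, the log pastes them inline):

for i in "${!keys[@]}"; do
  rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"       # target keyring
  hostrpc keyring_file_add_key "key$i" "${keys[$i]}"       # host keyring
  if [[ -n ${ckeys[$i]} ]]; then                           # ckeys[3] is empty: unidirectional auth
    rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    hostrpc keyring_file_add_key "ckey$i" "${ckeys[$i]}"
  fi
done

hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
# -> "completed" when the DH-CHAP exchange succeeded
hostrpc bdev_nvme_detach_controller nvme0

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$(cat "${keys[0]}")" --dhchap-ctrl-secret "$(cat "${ckeys[0]}")"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
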
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.409 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.x7P 00:13:11.409 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.x7P 00:13:11.667 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Jzt ]] 00:13:11.667 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Jzt 00:13:11.667 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.667 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.667 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.667 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Jzt 00:13:11.667 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Jzt 00:13:11.924 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:11.924 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.odU 00:13:11.924 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.924 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.924 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.924 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.odU 00:13:11.924 13:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.odU 00:13:12.182 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.wNE ]] 00:13:12.182 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wNE 00:13:12.182 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.182 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.182 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.182 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wNE 00:13:12.182 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wNE 00:13:12.440 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:12.440 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Ch0 00:13:12.440 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.440 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.440 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.440 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Ch0 00:13:12.440 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Ch0 00:13:12.698 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:13:12.698 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:12.698 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:12.698 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:12.698 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:12.698 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:12.955 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:13:12.955 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:12.955 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:12.955 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:12.955 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:12.955 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.955 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.955 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.955 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.955 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.955 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.955 13:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.213 00:13:13.213 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:13.213 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:13.213 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.470 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.470 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.470 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.470 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.470 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.470 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:13.470 { 00:13:13.470 "cntlid": 1, 00:13:13.470 "qid": 0, 00:13:13.470 "state": "enabled", 00:13:13.470 "thread": "nvmf_tgt_poll_group_000", 00:13:13.470 "listen_address": { 00:13:13.470 "trtype": "TCP", 00:13:13.470 "adrfam": "IPv4", 00:13:13.470 "traddr": "10.0.0.2", 00:13:13.470 "trsvcid": "4420" 00:13:13.470 }, 00:13:13.470 "peer_address": { 00:13:13.470 "trtype": "TCP", 00:13:13.470 "adrfam": "IPv4", 00:13:13.470 "traddr": "10.0.0.1", 00:13:13.470 "trsvcid": "44074" 00:13:13.470 }, 00:13:13.470 "auth": { 00:13:13.470 "state": "completed", 00:13:13.470 "digest": "sha256", 00:13:13.470 "dhgroup": "null" 00:13:13.470 } 00:13:13.470 } 00:13:13.470 ]' 00:13:13.470 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:13.471 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:13.471 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:13.471 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:13.471 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:13.471 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.471 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.471 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.734 13:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:ZmI5YWMwZTdiYTUyMmQwOGI4MmE4YTYxODRmYWFjZDM3NWI2YTI4NTkyOTIyOGE0Mc2IAg==: --dhchap-ctrl-secret DHHC-1:03:ZjRlODQ3NjQwN2ZiMzU0OTM3NmM2YmYwYjNiMGFjMmYyNWI1MTgxYmNjZmNkZDJiNDVhM2M2YzUyYjRkM2YxYgA4Ht0=: 00:13:14.715 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.715 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:14.715 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.715 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.715 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.715 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:14.715 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:14.715 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:14.973 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:13:14.973 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:14.973 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:14.973 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:14.973 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:14.973 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.973 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.973 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.973 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.973 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.973 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.973 13:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:13:15.231 00:13:15.231 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:15.231 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:15.231 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.489 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.489 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.489 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.489 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.489 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.489 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:15.489 { 00:13:15.489 "cntlid": 3, 00:13:15.489 "qid": 0, 00:13:15.489 "state": "enabled", 00:13:15.489 "thread": "nvmf_tgt_poll_group_000", 00:13:15.489 "listen_address": { 00:13:15.489 "trtype": "TCP", 00:13:15.489 "adrfam": "IPv4", 00:13:15.489 "traddr": "10.0.0.2", 00:13:15.489 "trsvcid": "4420" 00:13:15.489 }, 00:13:15.489 "peer_address": { 00:13:15.489 "trtype": "TCP", 00:13:15.489 "adrfam": "IPv4", 00:13:15.489 "traddr": "10.0.0.1", 00:13:15.489 "trsvcid": "44118" 00:13:15.489 }, 00:13:15.489 "auth": { 00:13:15.489 "state": "completed", 00:13:15.489 "digest": "sha256", 00:13:15.489 "dhgroup": "null" 00:13:15.489 } 00:13:15.489 } 00:13:15.489 ]' 00:13:15.489 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:15.489 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:15.489 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:15.489 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:15.489 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:15.749 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.749 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.749 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.007 13:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjJkZDYxMmQ2MDRiNjg5NjJlMmI4YTY4OGYwMDY3NWPkJHk/: --dhchap-ctrl-secret DHHC-1:02:YzYyZmEwNGNlYzYwN2M4MTQzZWJkNjQwNDJmMjQ1MDc4NGNkNzg1NWUyZDVkNjQzuOGhjA==: 00:13:16.942 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.942 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:13:16.942 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:16.942 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.942 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.942 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.942 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:16.942 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:16.942 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:16.942 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:13:16.942 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:16.942 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:16.942 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:16.942 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:16.942 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.942 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.942 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.942 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.942 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.942 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.942 13:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.509 00:13:17.509 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:17.509 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:17.509 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.509 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.509 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.509 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.509 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.509 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.509 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:17.509 { 00:13:17.509 "cntlid": 5, 00:13:17.509 "qid": 0, 00:13:17.509 "state": "enabled", 00:13:17.509 "thread": "nvmf_tgt_poll_group_000", 00:13:17.509 "listen_address": { 00:13:17.509 "trtype": "TCP", 00:13:17.509 "adrfam": "IPv4", 00:13:17.509 "traddr": "10.0.0.2", 00:13:17.509 "trsvcid": "4420" 00:13:17.509 }, 00:13:17.509 "peer_address": { 00:13:17.509 "trtype": "TCP", 00:13:17.509 "adrfam": "IPv4", 00:13:17.509 "traddr": "10.0.0.1", 00:13:17.509 "trsvcid": "44154" 00:13:17.509 }, 00:13:17.509 "auth": { 00:13:17.509 "state": "completed", 00:13:17.509 "digest": "sha256", 00:13:17.509 "dhgroup": "null" 00:13:17.509 } 00:13:17.509 } 00:13:17.509 ]' 00:13:17.509 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:17.767 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:17.767 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:17.767 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:17.767 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:17.767 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.767 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.767 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.024 13:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjkxNWIxYmZlZDgwYWRhZjkxOWUxMmYxMzc5ODg3OWMyMTZkNjc0ZmI3M2YxODIx/I0Q8Q==: --dhchap-ctrl-secret DHHC-1:01:YjE4YTQwZmE2NGMwYWMwNWNkZDNhNjg0YjhkYTExZTEaz+PP: 00:13:18.958 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.958 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:18.958 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:18.958 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.958 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.958 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:18.958 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:18.958 13:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:19.215 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:13:19.215 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:19.215 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:19.215 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:19.215 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:19.215 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.215 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:13:19.215 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.215 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.215 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.216 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:19.216 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:19.473 00:13:19.473 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:19.473 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:19.473 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.730 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.730 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.730 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.730 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.730 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.730 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:19.730 { 00:13:19.730 "cntlid": 7, 00:13:19.730 "qid": 0, 00:13:19.730 "state": "enabled", 00:13:19.730 "thread": "nvmf_tgt_poll_group_000", 00:13:19.730 "listen_address": { 00:13:19.730 "trtype": "TCP", 00:13:19.730 "adrfam": "IPv4", 00:13:19.730 "traddr": "10.0.0.2", 00:13:19.730 "trsvcid": "4420" 00:13:19.730 }, 00:13:19.730 "peer_address": { 00:13:19.730 "trtype": "TCP", 00:13:19.730 "adrfam": "IPv4", 00:13:19.730 "traddr": "10.0.0.1", 00:13:19.730 "trsvcid": "44188" 00:13:19.730 }, 00:13:19.730 "auth": { 00:13:19.730 "state": "completed", 00:13:19.730 "digest": "sha256", 00:13:19.730 "dhgroup": "null" 00:13:19.730 } 00:13:19.730 } 00:13:19.730 ]' 00:13:19.730 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:19.730 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:19.730 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:19.730 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:19.730 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:19.730 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.730 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.730 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.987 13:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkYTAwYWZjYzcwMGYwMzQxNTJkODZlNjE0YmNiNWUzYWZkYWVlZmQ0M2IyM2Y1ZTYwMmQ0YzU4NmExN2ZkMrPSnvI=: 00:13:20.918 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.918 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:20.918 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.918 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.918 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.918 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:20.918 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:20.918 13:42:17 
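[Note on the records above/below: the pass just completed covers the "null" DH group for keys 0-3; the records that follow repeat the identical cycle for ffdhe2048 and then ffdhe3072. The driver is the nested loop in target/auth.sh visible at the @92-@96 trace markers. A minimal sketch reconstructed from the xtrace output — the array contents are inferred, only the traced commands are certain:

    # sketch of the target/auth.sh driver loop, reconstructed from the trace
    for dhgroup in "${dhgroups[@]}"; do        # null ffdhe2048 ffdhe3072 ...
      for keyid in "${!keys[@]}"; do           # 0 1 2 3
        # pin the host to exactly one digest/dhgroup combination per pass
        hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha256 "$dhgroup" "$keyid"
      done
    done
]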
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:20.918 13:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:21.176 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:13:21.176 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:21.176 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:21.176 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:21.176 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:21.176 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.176 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.176 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.176 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.176 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.176 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.176 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.435 00:13:21.694 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:21.694 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:21.694 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.694 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.951 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.951 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.951 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.951 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.951 13:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:21.951 { 00:13:21.951 "cntlid": 9, 00:13:21.951 "qid": 0, 00:13:21.951 "state": "enabled", 00:13:21.951 "thread": "nvmf_tgt_poll_group_000", 00:13:21.951 "listen_address": { 00:13:21.951 "trtype": "TCP", 00:13:21.951 "adrfam": "IPv4", 00:13:21.951 "traddr": "10.0.0.2", 00:13:21.951 "trsvcid": "4420" 00:13:21.951 }, 00:13:21.951 "peer_address": { 00:13:21.951 "trtype": "TCP", 00:13:21.951 "adrfam": "IPv4", 00:13:21.951 "traddr": "10.0.0.1", 00:13:21.951 "trsvcid": "44222" 00:13:21.951 }, 00:13:21.951 "auth": { 00:13:21.951 "state": "completed", 00:13:21.951 "digest": "sha256", 00:13:21.951 "dhgroup": "ffdhe2048" 00:13:21.951 } 00:13:21.951 } 00:13:21.951 ]' 00:13:21.951 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:21.951 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:21.951 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:21.951 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:21.951 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:21.951 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.951 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.951 13:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.208 13:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmI5YWMwZTdiYTUyMmQwOGI4MmE4YTYxODRmYWFjZDM3NWI2YTI4NTkyOTIyOGE0Mc2IAg==: --dhchap-ctrl-secret DHHC-1:03:ZjRlODQ3NjQwN2ZiMzU0OTM3NmM2YmYwYjNiMGFjMmYyNWI1MTgxYmNjZmNkZDJiNDVhM2M2YzUyYjRkM2YxYgA4Ht0=: 00:13:23.143 13:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.143 13:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:23.143 13:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.143 13:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.143 13:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.143 13:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:23.143 13:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:23.143 13:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:23.402 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:13:23.402 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:23.402 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:23.402 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:23.402 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:23.402 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.402 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.402 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.402 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.402 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.402 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.402 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.660 00:13:23.660 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:23.660 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.660 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:23.917 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.917 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.917 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.917 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.917 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.917 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:23.917 { 00:13:23.917 "cntlid": 11, 00:13:23.917 "qid": 0, 00:13:23.917 "state": "enabled", 00:13:23.917 "thread": "nvmf_tgt_poll_group_000", 00:13:23.917 "listen_address": { 
00:13:23.917 "trtype": "TCP", 00:13:23.917 "adrfam": "IPv4", 00:13:23.917 "traddr": "10.0.0.2", 00:13:23.917 "trsvcid": "4420" 00:13:23.917 }, 00:13:23.917 "peer_address": { 00:13:23.917 "trtype": "TCP", 00:13:23.917 "adrfam": "IPv4", 00:13:23.917 "traddr": "10.0.0.1", 00:13:23.917 "trsvcid": "52832" 00:13:23.917 }, 00:13:23.917 "auth": { 00:13:23.917 "state": "completed", 00:13:23.917 "digest": "sha256", 00:13:23.917 "dhgroup": "ffdhe2048" 00:13:23.917 } 00:13:23.917 } 00:13:23.917 ]' 00:13:23.917 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:23.917 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:23.917 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:23.917 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:23.917 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:23.917 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.917 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.917 13:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.176 13:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjJkZDYxMmQ2MDRiNjg5NjJlMmI4YTY4OGYwMDY3NWPkJHk/: --dhchap-ctrl-secret DHHC-1:02:YzYyZmEwNGNlYzYwN2M4MTQzZWJkNjQwNDJmMjQ1MDc4NGNkNzg1NWUyZDVkNjQzuOGhjA==: 00:13:25.108 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.108 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:25.108 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.108 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.108 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.108 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:25.108 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:25.108 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:25.366 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:13:25.366 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:25.366 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:25.366 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:25.366 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:25.366 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.366 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.366 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.366 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.366 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.366 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.366 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.624 00:13:25.881 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:25.881 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.881 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:25.881 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.881 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.881 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.881 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.881 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.881 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:25.881 { 00:13:25.881 "cntlid": 13, 00:13:25.881 "qid": 0, 00:13:25.881 "state": "enabled", 00:13:25.881 "thread": "nvmf_tgt_poll_group_000", 00:13:25.881 "listen_address": { 00:13:25.881 "trtype": "TCP", 00:13:25.881 "adrfam": "IPv4", 00:13:25.881 "traddr": "10.0.0.2", 00:13:25.881 "trsvcid": "4420" 00:13:25.881 }, 00:13:25.881 "peer_address": { 00:13:25.881 "trtype": "TCP", 00:13:25.881 "adrfam": "IPv4", 00:13:25.881 "traddr": "10.0.0.1", 00:13:25.881 "trsvcid": "52874" 00:13:25.881 }, 00:13:25.881 "auth": { 00:13:25.881 
"state": "completed", 00:13:25.881 "digest": "sha256", 00:13:25.881 "dhgroup": "ffdhe2048" 00:13:25.881 } 00:13:25.881 } 00:13:25.881 ]' 00:13:25.881 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:26.139 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:26.139 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:26.139 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:26.139 13:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:26.139 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.139 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.139 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.396 13:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjkxNWIxYmZlZDgwYWRhZjkxOWUxMmYxMzc5ODg3OWMyMTZkNjc0ZmI3M2YxODIx/I0Q8Q==: --dhchap-ctrl-secret DHHC-1:01:YjE4YTQwZmE2NGMwYWMwNWNkZDNhNjg0YjhkYTExZTEaz+PP: 00:13:27.330 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.330 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:27.330 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.330 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.330 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.330 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:27.330 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:27.330 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:27.588 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:13:27.588 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:27.588 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:27.588 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:27.588 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:13:27.588 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.588 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:13:27.588 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.588 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.588 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.588 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:27.588 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:27.846 00:13:27.847 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:27.847 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:27.847 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.105 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.105 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.105 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.105 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.105 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.105 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:28.105 { 00:13:28.105 "cntlid": 15, 00:13:28.105 "qid": 0, 00:13:28.105 "state": "enabled", 00:13:28.105 "thread": "nvmf_tgt_poll_group_000", 00:13:28.105 "listen_address": { 00:13:28.105 "trtype": "TCP", 00:13:28.105 "adrfam": "IPv4", 00:13:28.105 "traddr": "10.0.0.2", 00:13:28.105 "trsvcid": "4420" 00:13:28.105 }, 00:13:28.105 "peer_address": { 00:13:28.105 "trtype": "TCP", 00:13:28.105 "adrfam": "IPv4", 00:13:28.105 "traddr": "10.0.0.1", 00:13:28.105 "trsvcid": "52896" 00:13:28.105 }, 00:13:28.105 "auth": { 00:13:28.105 "state": "completed", 00:13:28.105 "digest": "sha256", 00:13:28.105 "dhgroup": "ffdhe2048" 00:13:28.105 } 00:13:28.105 } 00:13:28.105 ]' 00:13:28.105 13:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:28.105 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:28.105 13:42:25 
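[Note: in the key3 pass just above (and earlier in the null-group pass), nvmf_subsystem_add_host and bdev_nvme_attach_controller are invoked with --dhchap-key key3 but no --dhchap-ctrlr-key, so those passes exercise unidirectional authentication only. That follows from the expansion traced at the @37 marker, a standard bash idiom for optional arguments:

    # empty ckeys[3] makes the array expand to nothing, so the flag is omitted
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"

Here $subnqn and $hostnqn stand in for the literal NQNs in the records; the script's actual variable names are not visible in the trace.]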
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:28.105 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:28.105 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:28.105 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.105 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.105 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.364 13:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkYTAwYWZjYzcwMGYwMzQxNTJkODZlNjE0YmNiNWUzYWZkYWVlZmQ0M2IyM2Y1ZTYwMmQ0YzU4NmExN2ZkMrPSnvI=: 00:13:29.299 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.299 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:29.299 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.299 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.299 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.299 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:29.299 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:29.299 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:29.299 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:29.557 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:13:29.557 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:29.557 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:29.557 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:29.557 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:29.557 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.557 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.557 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.557 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.557 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.557 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.557 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.816 00:13:29.816 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:29.816 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:29.816 13:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.382 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.382 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.382 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.382 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.382 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.382 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:30.382 { 00:13:30.382 "cntlid": 17, 00:13:30.382 "qid": 0, 00:13:30.382 "state": "enabled", 00:13:30.382 "thread": "nvmf_tgt_poll_group_000", 00:13:30.382 "listen_address": { 00:13:30.382 "trtype": "TCP", 00:13:30.382 "adrfam": "IPv4", 00:13:30.382 "traddr": "10.0.0.2", 00:13:30.382 "trsvcid": "4420" 00:13:30.382 }, 00:13:30.382 "peer_address": { 00:13:30.382 "trtype": "TCP", 00:13:30.382 "adrfam": "IPv4", 00:13:30.382 "traddr": "10.0.0.1", 00:13:30.382 "trsvcid": "52918" 00:13:30.382 }, 00:13:30.382 "auth": { 00:13:30.382 "state": "completed", 00:13:30.382 "digest": "sha256", 00:13:30.382 "dhgroup": "ffdhe3072" 00:13:30.382 } 00:13:30.382 } 00:13:30.382 ]' 00:13:30.382 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:30.382 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:30.382 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:30.382 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:30.382 13:42:27 
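[Note: after each attach the script does not trust the RPC exit code alone; it verifies that the controller exists and that the qpair actually completed DH-HMAC-CHAP, via the jq filters traced at the @44-@48 markers. Approximately, for the pass in progress here:

    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256     ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]   # ffdhe3072 in this pass
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]

The $qpairs holder matches the local named qpairs assigned at the @45 marker; piping through <<< is a readability choice in this sketch.]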
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:30.382 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.382 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.382 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.641 13:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmI5YWMwZTdiYTUyMmQwOGI4MmE4YTYxODRmYWFjZDM3NWI2YTI4NTkyOTIyOGE0Mc2IAg==: --dhchap-ctrl-secret DHHC-1:03:ZjRlODQ3NjQwN2ZiMzU0OTM3NmM2YmYwYjNiMGFjMmYyNWI1MTgxYmNjZmNkZDJiNDVhM2M2YzUyYjRkM2YxYgA4Ht0=: 00:13:31.578 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.578 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:31.578 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.578 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.578 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.578 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:31.578 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:31.578 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:31.837 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:13:31.837 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:31.837 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:31.837 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:31.837 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:31.838 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.838 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.838 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.838 13:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.838 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.838 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.838 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.096 00:13:32.096 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:32.096 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.096 13:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:32.354 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.354 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.354 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.354 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.354 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.354 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:32.354 { 00:13:32.354 "cntlid": 19, 00:13:32.354 "qid": 0, 00:13:32.354 "state": "enabled", 00:13:32.354 "thread": "nvmf_tgt_poll_group_000", 00:13:32.354 "listen_address": { 00:13:32.354 "trtype": "TCP", 00:13:32.354 "adrfam": "IPv4", 00:13:32.354 "traddr": "10.0.0.2", 00:13:32.354 "trsvcid": "4420" 00:13:32.354 }, 00:13:32.354 "peer_address": { 00:13:32.354 "trtype": "TCP", 00:13:32.354 "adrfam": "IPv4", 00:13:32.354 "traddr": "10.0.0.1", 00:13:32.354 "trsvcid": "52936" 00:13:32.354 }, 00:13:32.354 "auth": { 00:13:32.354 "state": "completed", 00:13:32.354 "digest": "sha256", 00:13:32.354 "dhgroup": "ffdhe3072" 00:13:32.354 } 00:13:32.354 } 00:13:32.354 ]' 00:13:32.354 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:32.354 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:32.354 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:32.354 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:32.354 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:32.354 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.354 13:42:29 
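[Note: each pass then detaches the SPDK initiator and repeats the handshake through the kernel host with nvme-cli, passing the same secrets in the DHHC-1 envelope (the two-digit field after DHHC-1: encodes how the secret is transformed before use; in this test it happens to track the key index). The pair of commands has this shape — secrets abbreviated here only for readability, the full values appear in the surrounding records:

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
         -q "$hostnqn" --hostid "$hostid" \
         --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: disconnected 1 controller(s)

$hostnqn/$hostid are stand-ins for the uuid-based values shown in the records.]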
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.354 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.614 13:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjJkZDYxMmQ2MDRiNjg5NjJlMmI4YTY4OGYwMDY3NWPkJHk/: --dhchap-ctrl-secret DHHC-1:02:YzYyZmEwNGNlYzYwN2M4MTQzZWJkNjQwNDJmMjQ1MDc4NGNkNzg1NWUyZDVkNjQzuOGhjA==: 00:13:33.547 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.547 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:33.547 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.547 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.547 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.547 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:33.547 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:33.547 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:33.805 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:13:33.805 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:33.805 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:33.805 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:33.805 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:33.805 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.805 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.805 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.805 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.805 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.805 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.805 13:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.063 00:13:34.320 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:34.320 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:34.320 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.320 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.579 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.579 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.579 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.579 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.579 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:34.579 { 00:13:34.579 "cntlid": 21, 00:13:34.579 "qid": 0, 00:13:34.579 "state": "enabled", 00:13:34.579 "thread": "nvmf_tgt_poll_group_000", 00:13:34.579 "listen_address": { 00:13:34.579 "trtype": "TCP", 00:13:34.579 "adrfam": "IPv4", 00:13:34.579 "traddr": "10.0.0.2", 00:13:34.579 "trsvcid": "4420" 00:13:34.579 }, 00:13:34.579 "peer_address": { 00:13:34.579 "trtype": "TCP", 00:13:34.579 "adrfam": "IPv4", 00:13:34.579 "traddr": "10.0.0.1", 00:13:34.579 "trsvcid": "37214" 00:13:34.579 }, 00:13:34.579 "auth": { 00:13:34.579 "state": "completed", 00:13:34.579 "digest": "sha256", 00:13:34.579 "dhgroup": "ffdhe3072" 00:13:34.579 } 00:13:34.579 } 00:13:34.579 ]' 00:13:34.579 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:34.579 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:34.579 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:34.579 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:34.579 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:34.579 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.579 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.579 13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.873 
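Every hostrpc line in this trace expands to the same rpc.py call pinned to the host application's socket, while bare rpc_cmd goes to the target; the wrapper only fixes the -s argument. A plausible definition consistent with the expansions logged at target/auth.sh@31 (the helper in the real auth.sh may be written differently):

    # Inferred from the logged expansions; illustrative, not the script's source.
    hostrpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }

    # Used exactly as in the trace:
    hostrpc bdev_nvme_get_controllers
    hostrpc bdev_nvme_detach_controller nvme0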
13:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjkxNWIxYmZlZDgwYWRhZjkxOWUxMmYxMzc5ODg3OWMyMTZkNjc0ZmI3M2YxODIx/I0Q8Q==: --dhchap-ctrl-secret DHHC-1:01:YjE4YTQwZmE2NGMwYWMwNWNkZDNhNjg0YjhkYTExZTEaz+PP: 00:13:35.831 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.831 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:35.831 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.831 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.831 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.831 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:35.831 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:35.831 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:35.831 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:13:35.831 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:35.831 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:35.831 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:35.831 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:35.831 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.831 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:13:35.831 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.831 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.831 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.831 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:35.831 13:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:36.398 00:13:36.398 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:36.398 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.398 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:36.398 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.398 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.398 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.398 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.398 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.398 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:36.398 { 00:13:36.398 "cntlid": 23, 00:13:36.398 "qid": 0, 00:13:36.398 "state": "enabled", 00:13:36.398 "thread": "nvmf_tgt_poll_group_000", 00:13:36.398 "listen_address": { 00:13:36.398 "trtype": "TCP", 00:13:36.398 "adrfam": "IPv4", 00:13:36.398 "traddr": "10.0.0.2", 00:13:36.398 "trsvcid": "4420" 00:13:36.398 }, 00:13:36.398 "peer_address": { 00:13:36.398 "trtype": "TCP", 00:13:36.398 "adrfam": "IPv4", 00:13:36.398 "traddr": "10.0.0.1", 00:13:36.398 "trsvcid": "37240" 00:13:36.398 }, 00:13:36.398 "auth": { 00:13:36.398 "state": "completed", 00:13:36.398 "digest": "sha256", 00:13:36.398 "dhgroup": "ffdhe3072" 00:13:36.398 } 00:13:36.398 } 00:13:36.398 ]' 00:13:36.659 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:36.659 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:36.659 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:36.659 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:36.659 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:36.659 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.659 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.659 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.916 13:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkYTAwYWZjYzcwMGYwMzQxNTJkODZlNjE0YmNiNWUzYWZkYWVlZmQ0M2IyM2Y1ZTYwMmQ0YzU4NmExN2ZkMrPSnvI=: 00:13:37.853 13:42:34 
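Worth noting in the keyid-3 pass just above: connect_authenticate builds its controller-key arguments as ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}). Bash's :+ "alternate value" expansion yields the flag only when the variable is set and non-empty, so here the array comes out empty and both nvmf_subsystem_add_host and bdev_nvme_attach_controller run with --dhchap-key key3 alone, exercising unidirectional authentication. A standalone illustration with a hypothetical key table:

    # ':+' expands to the alternate text only for a set, non-empty value.
    # The trace uses $3 (the function's keyid argument); $i plays that role here.
    ckeys=(ckey0 ckey1 ckey2 "")   # hypothetical: keyid 3 has no controller key
    for i in 0 1 2 3; do
        ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
        echo "keyid $i: --dhchap-key key$i ${ckey[*]}"
    done
    # keyid 3 prints no --dhchap-ctrlr-key, matching the commands above.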
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.853 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:37.853 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.853 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.853 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.853 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:37.853 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:37.853 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:37.853 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:38.112 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:13:38.112 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:38.112 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:38.112 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:38.112 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:38.112 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.112 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.112 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.112 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.112 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.112 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.112 13:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.370 00:13:38.370 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:38.370 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:38.370 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.628 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.628 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.628 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.628 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.628 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.628 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:38.628 { 00:13:38.628 "cntlid": 25, 00:13:38.628 "qid": 0, 00:13:38.628 "state": "enabled", 00:13:38.628 "thread": "nvmf_tgt_poll_group_000", 00:13:38.628 "listen_address": { 00:13:38.628 "trtype": "TCP", 00:13:38.628 "adrfam": "IPv4", 00:13:38.628 "traddr": "10.0.0.2", 00:13:38.628 "trsvcid": "4420" 00:13:38.628 }, 00:13:38.628 "peer_address": { 00:13:38.628 "trtype": "TCP", 00:13:38.628 "adrfam": "IPv4", 00:13:38.628 "traddr": "10.0.0.1", 00:13:38.628 "trsvcid": "37274" 00:13:38.628 }, 00:13:38.628 "auth": { 00:13:38.628 "state": "completed", 00:13:38.628 "digest": "sha256", 00:13:38.628 "dhgroup": "ffdhe4096" 00:13:38.628 } 00:13:38.628 } 00:13:38.628 ]' 00:13:38.628 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:38.628 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:38.628 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:38.887 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:38.887 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:38.887 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.887 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.887 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.145 13:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmI5YWMwZTdiYTUyMmQwOGI4MmE4YTYxODRmYWFjZDM3NWI2YTI4NTkyOTIyOGE0Mc2IAg==: --dhchap-ctrl-secret DHHC-1:03:ZjRlODQ3NjQwN2ZiMzU0OTM3NmM2YmYwYjNiMGFjMmYyNWI1MTgxYmNjZmNkZDJiNDVhM2M2YzUyYjRkM2YxYgA4Ht0=: 00:13:40.081 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
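The kernel-initiator half of each pass is plain nvme-cli: connect carries the host's DH-CHAP secret, plus a controller secret whenever bidirectional authentication is being exercised, and the disconnect is by subsystem NQN. The shape of one round trip, with the addressing from this run and the secrets reduced to placeholders:

    # One kernel-host authentication round trip (placeholder secrets).
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret 'DHHC-1:00:<host key>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<controller key>:'   # omitted on unidirectional passes
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0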
00:13:40.081 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:40.081 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.081 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.081 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.081 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:40.081 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:40.081 13:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:40.081 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:13:40.081 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:40.081 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:40.081 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:40.081 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:40.081 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.081 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.081 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.081 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.081 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.081 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.081 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.647 00:13:40.647 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:40.647 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:40.647 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.904 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.904 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:40.904 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.904 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.904 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.904 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:40.904 { 00:13:40.904 "cntlid": 27, 00:13:40.904 "qid": 0, 00:13:40.904 "state": "enabled", 00:13:40.904 "thread": "nvmf_tgt_poll_group_000", 00:13:40.904 "listen_address": { 00:13:40.904 "trtype": "TCP", 00:13:40.904 "adrfam": "IPv4", 00:13:40.904 "traddr": "10.0.0.2", 00:13:40.904 "trsvcid": "4420" 00:13:40.904 }, 00:13:40.904 "peer_address": { 00:13:40.904 "trtype": "TCP", 00:13:40.904 "adrfam": "IPv4", 00:13:40.904 "traddr": "10.0.0.1", 00:13:40.904 "trsvcid": "37304" 00:13:40.904 }, 00:13:40.904 "auth": { 00:13:40.904 "state": "completed", 00:13:40.904 "digest": "sha256", 00:13:40.904 "dhgroup": "ffdhe4096" 00:13:40.904 } 00:13:40.904 } 00:13:40.904 ]' 00:13:40.905 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:40.905 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:40.905 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:40.905 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:40.905 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:40.905 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.905 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.905 13:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.163 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjJkZDYxMmQ2MDRiNjg5NjJlMmI4YTY4OGYwMDY3NWPkJHk/: --dhchap-ctrl-secret DHHC-1:02:YzYyZmEwNGNlYzYwN2M4MTQzZWJkNjQwNDJmMjQ1MDc4NGNkNzg1NWUyZDVkNjQzuOGhjA==: 00:13:42.095 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.095 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:42.095 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.095 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.095 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.095 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:42.095 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:42.095 13:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:42.353 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:13:42.353 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:42.353 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:42.353 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:42.353 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:42.353 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.353 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.353 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.353 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.353 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.353 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.353 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.612 00:13:42.612 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:42.612 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:42.612 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.870 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.870 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:42.870 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.870 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.870 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.870 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:42.870 { 00:13:42.870 "cntlid": 29, 00:13:42.870 "qid": 0, 00:13:42.870 "state": "enabled", 00:13:42.870 "thread": "nvmf_tgt_poll_group_000", 00:13:42.870 "listen_address": { 00:13:42.870 "trtype": "TCP", 00:13:42.870 "adrfam": "IPv4", 00:13:42.870 "traddr": "10.0.0.2", 00:13:42.870 "trsvcid": "4420" 00:13:42.870 }, 00:13:42.870 "peer_address": { 00:13:42.870 "trtype": "TCP", 00:13:42.870 "adrfam": "IPv4", 00:13:42.870 "traddr": "10.0.0.1", 00:13:42.870 "trsvcid": "33954" 00:13:42.870 }, 00:13:42.870 "auth": { 00:13:42.870 "state": "completed", 00:13:42.870 "digest": "sha256", 00:13:42.870 "dhgroup": "ffdhe4096" 00:13:42.870 } 00:13:42.870 } 00:13:42.870 ]' 00:13:42.870 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:42.870 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:42.870 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:43.131 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:43.131 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:43.131 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.131 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.131 13:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.390 13:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjkxNWIxYmZlZDgwYWRhZjkxOWUxMmYxMzc5ODg3OWMyMTZkNjc0ZmI3M2YxODIx/I0Q8Q==: --dhchap-ctrl-secret DHHC-1:01:YjE4YTQwZmE2NGMwYWMwNWNkZDNhNjg0YjhkYTExZTEaz+PP: 00:13:44.336 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.336 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:44.336 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.336 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.336 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.336 13:42:41 
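About the secret strings themselves: the DHHC-1:NN: prefix identifies the secret transformation defined for NVMe-oF DH-HMAC-CHAP (00 = unhashed, 01/02/03 = SHA-256/384/512-transformed), and the payload is a base64 blob of key material plus a CRC, which is why every secret in the trace carries a trailing ':'. Recent nvme-cli builds can mint such secrets; the flags below are from its gen-dhchap-key plugin and should be checked against the installed version:

    # Mint a 32-byte, SHA-256-transformed DH-CHAP secret (verify flag names
    # against `nvme gen-dhchap-key --help` on your nvme-cli build).
    nvme gen-dhchap-key --key-length=32 --hmac=1 \
        --nqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    # Prints a string of the form DHHC-1:01:<base64(key || crc32)>: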
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:44.336 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:44.336 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:44.594 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:13:44.594 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:44.594 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:44.594 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:44.594 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:44.594 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.594 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:13:44.594 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.594 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.594 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.594 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:44.594 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:44.852 00:13:44.852 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:44.852 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.852 13:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:45.110 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.110 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.110 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.110 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.110 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:13:45.110 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:45.110 { 00:13:45.110 "cntlid": 31, 00:13:45.110 "qid": 0, 00:13:45.110 "state": "enabled", 00:13:45.110 "thread": "nvmf_tgt_poll_group_000", 00:13:45.110 "listen_address": { 00:13:45.110 "trtype": "TCP", 00:13:45.110 "adrfam": "IPv4", 00:13:45.110 "traddr": "10.0.0.2", 00:13:45.110 "trsvcid": "4420" 00:13:45.110 }, 00:13:45.110 "peer_address": { 00:13:45.110 "trtype": "TCP", 00:13:45.110 "adrfam": "IPv4", 00:13:45.110 "traddr": "10.0.0.1", 00:13:45.110 "trsvcid": "33972" 00:13:45.110 }, 00:13:45.110 "auth": { 00:13:45.110 "state": "completed", 00:13:45.110 "digest": "sha256", 00:13:45.110 "dhgroup": "ffdhe4096" 00:13:45.110 } 00:13:45.110 } 00:13:45.110 ]' 00:13:45.110 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:45.110 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:45.110 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:45.110 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:45.110 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:45.110 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.110 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.110 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.368 13:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkYTAwYWZjYzcwMGYwMzQxNTJkODZlNjE0YmNiNWUzYWZkYWVlZmQ0M2IyM2Y1ZTYwMmQ0YzU4NmExN2ZkMrPSnvI=: 00:13:46.301 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.301 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:46.301 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.301 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.301 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.301 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:46.301 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:46.301 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:46.301 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:46.559 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:13:46.559 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:46.559 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:46.559 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:46.559 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:46.559 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.559 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.559 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.559 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.559 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.560 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.560 13:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.125 00:13:47.125 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:47.125 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:47.125 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.382 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.382 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.382 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.382 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.382 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.382 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:47.382 { 00:13:47.382 "cntlid": 33, 00:13:47.382 "qid": 0, 00:13:47.382 "state": "enabled", 00:13:47.382 "thread": "nvmf_tgt_poll_group_000", 00:13:47.382 "listen_address": { 
00:13:47.382 "trtype": "TCP", 00:13:47.382 "adrfam": "IPv4", 00:13:47.382 "traddr": "10.0.0.2", 00:13:47.382 "trsvcid": "4420" 00:13:47.382 }, 00:13:47.382 "peer_address": { 00:13:47.382 "trtype": "TCP", 00:13:47.382 "adrfam": "IPv4", 00:13:47.382 "traddr": "10.0.0.1", 00:13:47.382 "trsvcid": "34008" 00:13:47.382 }, 00:13:47.382 "auth": { 00:13:47.382 "state": "completed", 00:13:47.382 "digest": "sha256", 00:13:47.382 "dhgroup": "ffdhe6144" 00:13:47.382 } 00:13:47.382 } 00:13:47.382 ]' 00:13:47.382 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:47.382 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:47.382 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:47.640 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:47.640 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:47.640 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.640 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.640 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.898 13:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmI5YWMwZTdiYTUyMmQwOGI4MmE4YTYxODRmYWFjZDM3NWI2YTI4NTkyOTIyOGE0Mc2IAg==: --dhchap-ctrl-secret DHHC-1:03:ZjRlODQ3NjQwN2ZiMzU0OTM3NmM2YmYwYjNiMGFjMmYyNWI1MTgxYmNjZmNkZDJiNDVhM2M2YzUyYjRkM2YxYgA4Ht0=: 00:13:48.832 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.832 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:48.832 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.832 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.832 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.832 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:48.832 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:48.832 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:48.832 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:13:48.832 13:42:45 
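Structurally this whole section is one nested loop: for every dhgroup in the ffdhe series the host driver is reconfigured with bdev_nvme_set_options, and the connect_authenticate cycle is replayed for keyids 0 through 3; this excerpt walks ffdhe3072, ffdhe4096, ffdhe6144 and then ffdhe8192. A compressed sketch of that driver, with names inferred from the target/auth.sh@92-96 loop markers (the real script may differ):

    # Shape of the loop producing this part of the log (inferred, illustrative).
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # groups seen in this excerpt
    keys=(key0 key1 key2 key3)                           # stand-in for the script's key table
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done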
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:48.832 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:48.832 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:48.832 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:48.832 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.832 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.832 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.832 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.832 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.832 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.832 13:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.396 00:13:49.396 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:49.396 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.396 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:49.653 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.653 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.653 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.653 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.653 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.653 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:49.653 { 00:13:49.653 "cntlid": 35, 00:13:49.653 "qid": 0, 00:13:49.653 "state": "enabled", 00:13:49.653 "thread": "nvmf_tgt_poll_group_000", 00:13:49.653 "listen_address": { 00:13:49.653 "trtype": "TCP", 00:13:49.653 "adrfam": "IPv4", 00:13:49.653 "traddr": "10.0.0.2", 00:13:49.653 "trsvcid": "4420" 00:13:49.653 }, 00:13:49.653 "peer_address": { 00:13:49.653 "trtype": "TCP", 00:13:49.653 "adrfam": "IPv4", 00:13:49.653 "traddr": "10.0.0.1", 00:13:49.653 "trsvcid": "34042" 00:13:49.653 
}, 00:13:49.653 "auth": { 00:13:49.653 "state": "completed", 00:13:49.653 "digest": "sha256", 00:13:49.653 "dhgroup": "ffdhe6144" 00:13:49.653 } 00:13:49.653 } 00:13:49.653 ]' 00:13:49.653 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:49.653 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:49.653 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:49.910 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:49.910 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:49.910 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.910 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.910 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.168 13:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjJkZDYxMmQ2MDRiNjg5NjJlMmI4YTY4OGYwMDY3NWPkJHk/: --dhchap-ctrl-secret DHHC-1:02:YzYyZmEwNGNlYzYwN2M4MTQzZWJkNjQwNDJmMjQ1MDc4NGNkNzg1NWUyZDVkNjQzuOGhjA==: 00:13:51.101 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.101 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:51.101 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.101 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.101 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.101 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:51.101 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:51.101 13:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:51.101 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:13:51.101 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:51.101 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:51.101 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:51.101 13:42:48 
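On the target side, each pass re-keys the subsystem rather than reusing a standing configuration: the host NQN is admitted with exactly one DH-CHAP key (and optional controller key) before the connect, and removed again after the disconnect, so a stale binding can never satisfy the next keyid. The target half of one pass, using the key names from the trace (rpc_cmd is the suite's target-side RPC helper):

    # Target half of a single keyid pass (key2 here, as in the trace).
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # ... host attaches / connects, auth is verified, host disconnects ...
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"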
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:51.101 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.358 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.358 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.358 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.358 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.358 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.358 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.924 00:13:51.924 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:51.924 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:51.924 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.924 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.924 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.924 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.924 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.924 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.924 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:51.924 { 00:13:51.924 "cntlid": 37, 00:13:51.924 "qid": 0, 00:13:51.924 "state": "enabled", 00:13:51.924 "thread": "nvmf_tgt_poll_group_000", 00:13:51.924 "listen_address": { 00:13:51.924 "trtype": "TCP", 00:13:51.924 "adrfam": "IPv4", 00:13:51.924 "traddr": "10.0.0.2", 00:13:51.924 "trsvcid": "4420" 00:13:51.924 }, 00:13:51.924 "peer_address": { 00:13:51.924 "trtype": "TCP", 00:13:51.924 "adrfam": "IPv4", 00:13:51.924 "traddr": "10.0.0.1", 00:13:51.924 "trsvcid": "34070" 00:13:51.924 }, 00:13:51.924 "auth": { 00:13:51.924 "state": "completed", 00:13:51.924 "digest": "sha256", 00:13:51.924 "dhgroup": "ffdhe6144" 00:13:51.924 } 00:13:51.924 } 00:13:51.924 ]' 00:13:51.924 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:52.181 13:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:52.181 13:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:52.181 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:52.181 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:52.181 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.181 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.181 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.439 13:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjkxNWIxYmZlZDgwYWRhZjkxOWUxMmYxMzc5ODg3OWMyMTZkNjc0ZmI3M2YxODIx/I0Q8Q==: --dhchap-ctrl-secret DHHC-1:01:YjE4YTQwZmE2NGMwYWMwNWNkZDNhNjg0YjhkYTExZTEaz+PP: 00:13:53.371 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.371 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:53.371 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.371 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.371 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.371 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:53.371 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:53.371 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:53.371 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:13:53.371 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:53.371 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:53.371 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:53.371 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:53.372 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:53.372 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:13:53.372 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.372 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.372 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.372 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:53.372 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:53.938 00:13:53.938 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:53.938 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:53.938 13:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.196 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.196 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.196 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.196 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.196 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.196 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:54.196 { 00:13:54.196 "cntlid": 39, 00:13:54.196 "qid": 0, 00:13:54.196 "state": "enabled", 00:13:54.196 "thread": "nvmf_tgt_poll_group_000", 00:13:54.196 "listen_address": { 00:13:54.196 "trtype": "TCP", 00:13:54.196 "adrfam": "IPv4", 00:13:54.196 "traddr": "10.0.0.2", 00:13:54.196 "trsvcid": "4420" 00:13:54.196 }, 00:13:54.196 "peer_address": { 00:13:54.196 "trtype": "TCP", 00:13:54.196 "adrfam": "IPv4", 00:13:54.196 "traddr": "10.0.0.1", 00:13:54.196 "trsvcid": "37564" 00:13:54.196 }, 00:13:54.196 "auth": { 00:13:54.196 "state": "completed", 00:13:54.196 "digest": "sha256", 00:13:54.196 "dhgroup": "ffdhe6144" 00:13:54.196 } 00:13:54.196 } 00:13:54.196 ]' 00:13:54.196 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:54.454 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:54.454 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:54.454 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:54.454 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:54.454 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.454 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.454 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.712 13:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkYTAwYWZjYzcwMGYwMzQxNTJkODZlNjE0YmNiNWUzYWZkYWVlZmQ0M2IyM2Y1ZTYwMmQ0YzU4NmExN2ZkMrPSnvI=: 00:13:55.645 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.645 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:55.645 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.645 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.645 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.645 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:55.645 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:55.645 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:55.645 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:55.936 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:13:55.936 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:55.936 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:55.936 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:55.936 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:55.936 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:55.936 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.936 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.936 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:55.936 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.936 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.936 13:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.539 00:13:56.539 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:56.539 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:56.539 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.809 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.809 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.809 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.809 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.809 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.809 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:56.809 { 00:13:56.809 "cntlid": 41, 00:13:56.809 "qid": 0, 00:13:56.809 "state": "enabled", 00:13:56.809 "thread": "nvmf_tgt_poll_group_000", 00:13:56.809 "listen_address": { 00:13:56.809 "trtype": "TCP", 00:13:56.809 "adrfam": "IPv4", 00:13:56.809 "traddr": "10.0.0.2", 00:13:56.809 "trsvcid": "4420" 00:13:56.809 }, 00:13:56.809 "peer_address": { 00:13:56.809 "trtype": "TCP", 00:13:56.809 "adrfam": "IPv4", 00:13:56.809 "traddr": "10.0.0.1", 00:13:56.809 "trsvcid": "37592" 00:13:56.809 }, 00:13:56.809 "auth": { 00:13:56.809 "state": "completed", 00:13:56.809 "digest": "sha256", 00:13:56.809 "dhgroup": "ffdhe8192" 00:13:56.809 } 00:13:56.809 } 00:13:56.809 ]' 00:13:56.809 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:56.809 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:56.809 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:57.067 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:57.067 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:57.067 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.067 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:57.067 13:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.325 13:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmI5YWMwZTdiYTUyMmQwOGI4MmE4YTYxODRmYWFjZDM3NWI2YTI4NTkyOTIyOGE0Mc2IAg==: --dhchap-ctrl-secret DHHC-1:03:ZjRlODQ3NjQwN2ZiMzU0OTM3NmM2YmYwYjNiMGFjMmYyNWI1MTgxYmNjZmNkZDJiNDVhM2M2YzUyYjRkM2YxYgA4Ht0=: 00:13:58.261 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.261 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:58.261 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.261 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.261 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.261 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:58.261 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:58.261 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:58.519 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:13:58.519 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:58.519 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:58.519 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:58.519 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:58.519 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.519 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.519 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.519 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.519 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.519 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.519 13:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.457 00:13:59.457 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:59.457 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:59.457 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.457 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.457 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.457 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.457 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.457 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.457 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:59.457 { 00:13:59.457 "cntlid": 43, 00:13:59.457 "qid": 0, 00:13:59.457 "state": "enabled", 00:13:59.457 "thread": "nvmf_tgt_poll_group_000", 00:13:59.457 "listen_address": { 00:13:59.457 "trtype": "TCP", 00:13:59.457 "adrfam": "IPv4", 00:13:59.457 "traddr": "10.0.0.2", 00:13:59.457 "trsvcid": "4420" 00:13:59.457 }, 00:13:59.457 "peer_address": { 00:13:59.457 "trtype": "TCP", 00:13:59.457 "adrfam": "IPv4", 00:13:59.457 "traddr": "10.0.0.1", 00:13:59.457 "trsvcid": "37614" 00:13:59.457 }, 00:13:59.457 "auth": { 00:13:59.457 "state": "completed", 00:13:59.457 "digest": "sha256", 00:13:59.457 "dhgroup": "ffdhe8192" 00:13:59.457 } 00:13:59.457 } 00:13:59.457 ]' 00:13:59.457 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:59.715 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:59.715 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:59.715 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:59.715 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:59.715 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.715 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.715 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.973 13:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjJkZDYxMmQ2MDRiNjg5NjJlMmI4YTY4OGYwMDY3NWPkJHk/: --dhchap-ctrl-secret DHHC-1:02:YzYyZmEwNGNlYzYwN2M4MTQzZWJkNjQwNDJmMjQ1MDc4NGNkNzg1NWUyZDVkNjQzuOGhjA==: 00:14:00.911 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.911 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:00.911 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.911 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.911 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.911 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:00.911 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:00.911 13:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:01.169 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:14:01.169 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:01.169 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:01.169 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:01.169 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:01.169 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.169 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.169 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.169 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.169 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.169 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.169 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.105 00:14:02.105 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:02.105 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:02.105 13:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.105 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.105 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.105 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.105 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.105 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.105 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:02.105 { 00:14:02.105 "cntlid": 45, 00:14:02.105 "qid": 0, 00:14:02.105 "state": "enabled", 00:14:02.105 "thread": "nvmf_tgt_poll_group_000", 00:14:02.105 "listen_address": { 00:14:02.105 "trtype": "TCP", 00:14:02.105 "adrfam": "IPv4", 00:14:02.105 "traddr": "10.0.0.2", 00:14:02.105 "trsvcid": "4420" 00:14:02.105 }, 00:14:02.105 "peer_address": { 00:14:02.105 "trtype": "TCP", 00:14:02.105 "adrfam": "IPv4", 00:14:02.105 "traddr": "10.0.0.1", 00:14:02.105 "trsvcid": "37646" 00:14:02.105 }, 00:14:02.105 "auth": { 00:14:02.105 "state": "completed", 00:14:02.105 "digest": "sha256", 00:14:02.105 "dhgroup": "ffdhe8192" 00:14:02.105 } 00:14:02.105 } 00:14:02.105 ]' 00:14:02.105 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:02.363 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:02.363 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:02.363 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:02.363 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:02.363 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.363 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.363 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.620 13:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjkxNWIxYmZlZDgwYWRhZjkxOWUxMmYxMzc5ODg3OWMyMTZkNjc0ZmI3M2YxODIx/I0Q8Q==: --dhchap-ctrl-secret 
DHHC-1:01:YjE4YTQwZmE2NGMwYWMwNWNkZDNhNjg0YjhkYTExZTEaz+PP: 00:14:03.557 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.557 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:03.557 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.557 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.557 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.557 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:03.557 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:03.557 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:03.815 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:14:03.815 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:03.815 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:03.815 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:03.815 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:03.815 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.815 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:03.815 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.815 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.815 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.815 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:03.815 13:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:04.749 00:14:04.749 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:04.749 13:43:01 
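Note that the key3 passes (the one beginning here and the earlier sha256/ffdhe6144 one) register the host with --dhchap-key key3 only and no --dhchap-ctrlr-key, so for that key index only the host is authenticated and the controller is not authenticated back. The mechanism is the ${ckeys[$3]:+...} expansion at target/auth.sh@37, which yields an empty array when no controller key exists for that index. A small self-contained illustration of that expansion (the array values here are made up):

# ${var:+word} expands to word only when var is set and non-empty; expanding
# it unquoted into an array therefore yields either two elements or none.
ckeys=("c0" "c1" "c2" "")     # hypothetical: index 3 has no controller key
for keyid in "${!ckeys[@]}"; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "key$keyid: ${#ckey[@]} extra arg(s) -> ${ckey[*]:-host-auth only}"
done

Run standalone, this prints two extra arguments for keys 0-2 and none for key3, which is why the rpc_cmd nvmf_subsystem_add_host lines in the trace alternate between carrying and omitting the --dhchap-ctrlr-key flag.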
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:04.749 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.749 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.749 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.749 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.749 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.749 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.749 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:04.749 { 00:14:04.749 "cntlid": 47, 00:14:04.749 "qid": 0, 00:14:04.749 "state": "enabled", 00:14:04.749 "thread": "nvmf_tgt_poll_group_000", 00:14:04.749 "listen_address": { 00:14:04.749 "trtype": "TCP", 00:14:04.749 "adrfam": "IPv4", 00:14:04.749 "traddr": "10.0.0.2", 00:14:04.749 "trsvcid": "4420" 00:14:04.749 }, 00:14:04.749 "peer_address": { 00:14:04.749 "trtype": "TCP", 00:14:04.749 "adrfam": "IPv4", 00:14:04.749 "traddr": "10.0.0.1", 00:14:04.749 "trsvcid": "41624" 00:14:04.749 }, 00:14:04.749 "auth": { 00:14:04.749 "state": "completed", 00:14:04.749 "digest": "sha256", 00:14:04.749 "dhgroup": "ffdhe8192" 00:14:04.749 } 00:14:04.749 } 00:14:04.749 ]' 00:14:04.749 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:05.008 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:05.008 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:05.008 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:05.008 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:05.008 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.008 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.008 13:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.266 13:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkYTAwYWZjYzcwMGYwMzQxNTJkODZlNjE0YmNiNWUzYWZkYWVlZmQ0M2IyM2Y1ZTYwMmQ0YzU4NmExN2ZkMrPSnvI=: 00:14:06.203 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.203 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:06.203 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.203 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.203 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.203 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:06.203 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:06.203 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:06.203 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:06.203 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:06.460 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:14:06.460 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:06.460 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:06.460 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:06.460 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:06.460 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.460 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.460 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.460 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.460 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.460 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.460 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.718 00:14:06.718 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:06.718 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:06.718 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.975 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.975 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.975 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.975 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.975 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.975 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:06.975 { 00:14:06.975 "cntlid": 49, 00:14:06.976 "qid": 0, 00:14:06.976 "state": "enabled", 00:14:06.976 "thread": "nvmf_tgt_poll_group_000", 00:14:06.976 "listen_address": { 00:14:06.976 "trtype": "TCP", 00:14:06.976 "adrfam": "IPv4", 00:14:06.976 "traddr": "10.0.0.2", 00:14:06.976 "trsvcid": "4420" 00:14:06.976 }, 00:14:06.976 "peer_address": { 00:14:06.976 "trtype": "TCP", 00:14:06.976 "adrfam": "IPv4", 00:14:06.976 "traddr": "10.0.0.1", 00:14:06.976 "trsvcid": "41652" 00:14:06.976 }, 00:14:06.976 "auth": { 00:14:06.976 "state": "completed", 00:14:06.976 "digest": "sha384", 00:14:06.976 "dhgroup": "null" 00:14:06.976 } 00:14:06.976 } 00:14:06.976 ]' 00:14:06.976 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:06.976 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:06.976 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:06.976 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:06.976 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:06.976 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.976 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.976 13:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.234 13:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmI5YWMwZTdiYTUyMmQwOGI4MmE4YTYxODRmYWFjZDM3NWI2YTI4NTkyOTIyOGE0Mc2IAg==: --dhchap-ctrl-secret DHHC-1:03:ZjRlODQ3NjQwN2ZiMzU0OTM3NmM2YmYwYjNiMGFjMmYyNWI1MTgxYmNjZmNkZDJiNDVhM2M2YzUyYjRkM2YxYgA4Ht0=: 00:14:08.170 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.170 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:08.170 13:43:05 
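Each pass also repeats the handshake through the kernel host stack: nvme connect is invoked with the same key material in its on-the-wire DHHC-1:<t>:<base64>: form (the second field encodes the secret's transform variant), then the controller is disconnected and the host entry removed from the subsystem. A hypothetical stand-alone reproduction of that leg, assuming nvme-cli's gen-dhchap-key helper (flag spellings can vary between nvme-cli versions, and the target must be configured with the same secret for the handshake to succeed):

hostnqn="nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"
subnqn="nqn.2024-03.io.spdk:cnode0"
# Generate a 48-byte secret in the DHHC-1 format used throughout this log.
secret=$(nvme gen-dhchap-key --key-length=48 --nqn="$hostnqn")
# Connect with the same flags the trace uses; --hostid is the bare uuid.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
    -q "$hostnqn" --hostid "${hostnqn##*:}" --dhchap-secret "$secret"
nvme disconnect -n "$subnqn"

The "NQN:... disconnected 1 controller(s)" lines interleaved in the trace are the nvme disconnect output from exactly this step.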
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.170 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.170 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.170 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:08.170 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:08.170 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:08.428 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:14:08.428 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:08.428 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:08.428 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:08.428 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:08.428 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.428 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.428 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.428 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.428 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.428 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.428 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.991 00:14:08.991 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:08.991 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:08.991 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.991 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.991 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.991 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.991 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.991 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.991 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:08.991 { 00:14:08.991 "cntlid": 51, 00:14:08.991 "qid": 0, 00:14:08.991 "state": "enabled", 00:14:08.991 "thread": "nvmf_tgt_poll_group_000", 00:14:08.991 "listen_address": { 00:14:08.991 "trtype": "TCP", 00:14:08.991 "adrfam": "IPv4", 00:14:08.991 "traddr": "10.0.0.2", 00:14:08.991 "trsvcid": "4420" 00:14:08.991 }, 00:14:08.991 "peer_address": { 00:14:08.991 "trtype": "TCP", 00:14:08.991 "adrfam": "IPv4", 00:14:08.991 "traddr": "10.0.0.1", 00:14:08.991 "trsvcid": "41676" 00:14:08.991 }, 00:14:08.991 "auth": { 00:14:08.991 "state": "completed", 00:14:08.991 "digest": "sha384", 00:14:08.991 "dhgroup": "null" 00:14:08.991 } 00:14:08.991 } 00:14:08.991 ]' 00:14:08.991 13:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:09.248 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:09.248 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:09.248 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:09.248 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:09.248 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.248 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.248 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.505 13:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjJkZDYxMmQ2MDRiNjg5NjJlMmI4YTY4OGYwMDY3NWPkJHk/: --dhchap-ctrl-secret DHHC-1:02:YzYyZmEwNGNlYzYwN2M4MTQzZWJkNjQwNDJmMjQ1MDc4NGNkNzg1NWUyZDVkNjQzuOGhjA==: 00:14:10.441 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.441 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:10.441 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.441 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.441 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.441 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:10.441 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:10.441 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:10.699 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:14:10.699 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:10.699 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:10.699 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:10.699 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:10.699 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.699 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.699 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.699 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.699 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.699 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.699 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.957 00:14:10.957 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:10.957 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:10.957 13:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.214 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.214 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.214 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.214 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.214 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:11.214 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:11.214 { 00:14:11.214 "cntlid": 53, 00:14:11.214 "qid": 0, 00:14:11.214 "state": "enabled", 00:14:11.214 "thread": "nvmf_tgt_poll_group_000", 00:14:11.214 "listen_address": { 00:14:11.214 "trtype": "TCP", 00:14:11.214 "adrfam": "IPv4", 00:14:11.214 "traddr": "10.0.0.2", 00:14:11.214 "trsvcid": "4420" 00:14:11.214 }, 00:14:11.214 "peer_address": { 00:14:11.214 "trtype": "TCP", 00:14:11.214 "adrfam": "IPv4", 00:14:11.214 "traddr": "10.0.0.1", 00:14:11.214 "trsvcid": "41702" 00:14:11.214 }, 00:14:11.214 "auth": { 00:14:11.214 "state": "completed", 00:14:11.214 "digest": "sha384", 00:14:11.214 "dhgroup": "null" 00:14:11.214 } 00:14:11.214 } 00:14:11.214 ]' 00:14:11.214 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:11.214 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:11.214 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:11.214 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:11.214 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:11.472 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.472 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.472 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.728 13:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjkxNWIxYmZlZDgwYWRhZjkxOWUxMmYxMzc5ODg3OWMyMTZkNjc0ZmI3M2YxODIx/I0Q8Q==: --dhchap-ctrl-secret DHHC-1:01:YjE4YTQwZmE2NGMwYWMwNWNkZDNhNjg0YjhkYTExZTEaz+PP: 00:14:12.664 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.664 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:12.664 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.664 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.664 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.664 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:12.664 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:12.664 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:12.664 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:14:12.664 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:12.664 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:12.664 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:12.664 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:12.664 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.664 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:12.664 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.664 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.664 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.664 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:12.664 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:13.230 00:14:13.230 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:13.230 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:13.230 13:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.230 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.230 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.230 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.230 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.487 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.487 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:13.487 { 00:14:13.487 "cntlid": 55, 00:14:13.487 "qid": 0, 00:14:13.487 "state": "enabled", 00:14:13.487 "thread": "nvmf_tgt_poll_group_000", 00:14:13.487 "listen_address": { 00:14:13.487 "trtype": "TCP", 00:14:13.487 "adrfam": "IPv4", 00:14:13.487 "traddr": "10.0.0.2", 00:14:13.487 "trsvcid": "4420" 00:14:13.487 }, 00:14:13.487 "peer_address": { 
00:14:13.487 "trtype": "TCP", 00:14:13.487 "adrfam": "IPv4", 00:14:13.487 "traddr": "10.0.0.1", 00:14:13.487 "trsvcid": "36724" 00:14:13.487 }, 00:14:13.487 "auth": { 00:14:13.487 "state": "completed", 00:14:13.487 "digest": "sha384", 00:14:13.487 "dhgroup": "null" 00:14:13.487 } 00:14:13.487 } 00:14:13.487 ]' 00:14:13.487 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:13.487 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:13.487 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:13.487 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:13.487 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:13.487 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.487 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.487 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.744 13:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkYTAwYWZjYzcwMGYwMzQxNTJkODZlNjE0YmNiNWUzYWZkYWVlZmQ0M2IyM2Y1ZTYwMmQ0YzU4NmExN2ZkMrPSnvI=: 00:14:14.678 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.678 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:14.678 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.678 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.678 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.678 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:14.678 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:14.678 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:14.678 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:14.935 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:14:14.935 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:14.935 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:14:14.935 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:14.935 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:14.935 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.935 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.935 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.935 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.935 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.935 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.936 13:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.193 00:14:15.193 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:15.193 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:15.193 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.450 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.450 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.450 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.450 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.450 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.450 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:15.450 { 00:14:15.450 "cntlid": 57, 00:14:15.450 "qid": 0, 00:14:15.450 "state": "enabled", 00:14:15.450 "thread": "nvmf_tgt_poll_group_000", 00:14:15.450 "listen_address": { 00:14:15.450 "trtype": "TCP", 00:14:15.450 "adrfam": "IPv4", 00:14:15.450 "traddr": "10.0.0.2", 00:14:15.450 "trsvcid": "4420" 00:14:15.450 }, 00:14:15.450 "peer_address": { 00:14:15.450 "trtype": "TCP", 00:14:15.450 "adrfam": "IPv4", 00:14:15.450 "traddr": "10.0.0.1", 00:14:15.450 "trsvcid": "36756" 00:14:15.450 }, 00:14:15.450 "auth": { 00:14:15.450 "state": "completed", 00:14:15.450 "digest": "sha384", 00:14:15.451 "dhgroup": "ffdhe2048" 00:14:15.451 } 00:14:15.451 } 00:14:15.451 ]' 
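Each pass through the trace above is one connect_authenticate cycle. A minimal sketch of the RPC sequence it drives, reconstructed from the commands visible in this log (the host-side SPDK app listens on /var/tmp/host.sock, the target uses the default RPC socket; variable names here are mine for readability):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    subnqn=nqn.2024-03.io.spdk:cnode0

    # 1. Pin the host-side initiator to one digest and one DH group.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

    # 2. Allow the host on the subsystem and bind it to a DH-HMAC-CHAP key pair.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 3. Attach from the host side; authentication runs during this connect.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 4. Read back the qpair and check the negotiated auth parameters.
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn"

    # 5. Detach and remove the host before the next dhgroup/key combination.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"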
00:14:15.451 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:15.451 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:15.451 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:15.451 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:15.451 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:15.451 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.451 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.451 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.709 13:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmI5YWMwZTdiYTUyMmQwOGI4MmE4YTYxODRmYWFjZDM3NWI2YTI4NTkyOTIyOGE0Mc2IAg==: --dhchap-ctrl-secret DHHC-1:03:ZjRlODQ3NjQwN2ZiMzU0OTM3NmM2YmYwYjNiMGFjMmYyNWI1MTgxYmNjZmNkZDJiNDVhM2M2YzUyYjRkM2YxYgA4Ht0=: 00:14:16.644 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.644 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:16.644 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.644 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.644 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.644 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:16.644 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:16.644 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:16.901 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:14:16.901 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:16.901 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:16.901 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:16.901 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:16.901 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.901 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.901 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.901 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.901 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.902 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.902 13:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.159 00:14:17.159 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:17.159 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:17.159 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.418 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.418 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.418 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.418 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.418 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.418 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:17.418 { 00:14:17.418 "cntlid": 59, 00:14:17.418 "qid": 0, 00:14:17.418 "state": "enabled", 00:14:17.418 "thread": "nvmf_tgt_poll_group_000", 00:14:17.418 "listen_address": { 00:14:17.418 "trtype": "TCP", 00:14:17.418 "adrfam": "IPv4", 00:14:17.418 "traddr": "10.0.0.2", 00:14:17.418 "trsvcid": "4420" 00:14:17.418 }, 00:14:17.418 "peer_address": { 00:14:17.418 "trtype": "TCP", 00:14:17.418 "adrfam": "IPv4", 00:14:17.418 "traddr": "10.0.0.1", 00:14:17.418 "trsvcid": "36780" 00:14:17.418 }, 00:14:17.418 "auth": { 00:14:17.418 "state": "completed", 00:14:17.418 "digest": "sha384", 00:14:17.418 "dhgroup": "ffdhe2048" 00:14:17.418 } 00:14:17.418 } 00:14:17.418 ]' 00:14:17.418 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:17.700 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:17.700 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:17.700 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:17.700 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:17.700 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.700 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.700 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.965 13:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjJkZDYxMmQ2MDRiNjg5NjJlMmI4YTY4OGYwMDY3NWPkJHk/: --dhchap-ctrl-secret DHHC-1:02:YzYyZmEwNGNlYzYwN2M4MTQzZWJkNjQwNDJmMjQ1MDc4NGNkNzg1NWUyZDVkNjQzuOGhjA==: 00:14:18.901 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.901 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:18.901 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.901 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.901 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.901 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:18.901 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:18.901 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:18.901 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:14:18.901 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:18.901 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:18.901 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:18.901 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:18.901 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.901 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.901 
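The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) assignment traced at target/auth.sh@37 is a bash idiom for building an optional argument list: when ckeys[keyid] is non-empty the array expands to the two words --dhchap-ctrlr-key ckeyN, enabling bidirectional authentication, and when it is empty the array expands to nothing. That is why the key3 commands in this log carry no --dhchap-ctrlr-key while key0 through key2 do. A sketch of the pattern ($3 is the function's key index; ckeys is populated earlier in auth.sh, outside this excerpt):

    # Optional-argument idiom: expand to two words or to nothing at all.
    keyid=$3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    # An empty array disappears under "${ckey[@]}", so the controller key
    # is simply omitted for one-way (host-only) authentication.
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"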
13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.901 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.192 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.192 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.192 13:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.450 00:14:19.450 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:19.450 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:19.450 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.708 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.708 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.708 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.708 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.708 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.708 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:19.708 { 00:14:19.708 "cntlid": 61, 00:14:19.708 "qid": 0, 00:14:19.708 "state": "enabled", 00:14:19.708 "thread": "nvmf_tgt_poll_group_000", 00:14:19.708 "listen_address": { 00:14:19.708 "trtype": "TCP", 00:14:19.708 "adrfam": "IPv4", 00:14:19.708 "traddr": "10.0.0.2", 00:14:19.708 "trsvcid": "4420" 00:14:19.708 }, 00:14:19.708 "peer_address": { 00:14:19.708 "trtype": "TCP", 00:14:19.708 "adrfam": "IPv4", 00:14:19.708 "traddr": "10.0.0.1", 00:14:19.708 "trsvcid": "36812" 00:14:19.708 }, 00:14:19.708 "auth": { 00:14:19.708 "state": "completed", 00:14:19.708 "digest": "sha384", 00:14:19.708 "dhgroup": "ffdhe2048" 00:14:19.708 } 00:14:19.708 } 00:14:19.708 ]' 00:14:19.708 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:19.708 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:19.708 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:19.708 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:19.708 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:19.708 13:43:16 
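The jq checks that follow each get_qpairs call (target/auth.sh@46-48) confirm the target actually negotiated what was configured. The backslash-heavy comparisons in the trace, e.g. [[ sha384 == \s\h\a\3\8\4 ]], are not corruption: bash xtrace escapes the right-hand side of == so it prints as a literal pattern. A sketch of the verification step, using the same field names visible in the qpair JSON above:

    # Pull the auth fields off the first qpair and assert on each of them.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "sha384" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe2048" ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == "completed" ]]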
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.708 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.708 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.967 13:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjkxNWIxYmZlZDgwYWRhZjkxOWUxMmYxMzc5ODg3OWMyMTZkNjc0ZmI3M2YxODIx/I0Q8Q==: --dhchap-ctrl-secret DHHC-1:01:YjE4YTQwZmE2NGMwYWMwNWNkZDNhNjg0YjhkYTExZTEaz+PP: 00:14:20.905 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.905 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:20.905 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.905 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.905 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.905 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:20.905 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:20.905 13:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:21.164 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:14:21.164 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:21.164 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:21.164 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:21.164 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:21.164 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.164 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:21.164 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.164 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.164 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.164 
13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:21.164 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:21.422 00:14:21.422 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:21.422 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:21.422 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.680 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.680 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.680 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.680 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.680 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.680 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:21.680 { 00:14:21.680 "cntlid": 63, 00:14:21.680 "qid": 0, 00:14:21.680 "state": "enabled", 00:14:21.680 "thread": "nvmf_tgt_poll_group_000", 00:14:21.680 "listen_address": { 00:14:21.680 "trtype": "TCP", 00:14:21.680 "adrfam": "IPv4", 00:14:21.680 "traddr": "10.0.0.2", 00:14:21.680 "trsvcid": "4420" 00:14:21.680 }, 00:14:21.680 "peer_address": { 00:14:21.680 "trtype": "TCP", 00:14:21.680 "adrfam": "IPv4", 00:14:21.680 "traddr": "10.0.0.1", 00:14:21.680 "trsvcid": "36842" 00:14:21.680 }, 00:14:21.680 "auth": { 00:14:21.680 "state": "completed", 00:14:21.680 "digest": "sha384", 00:14:21.680 "dhgroup": "ffdhe2048" 00:14:21.680 } 00:14:21.680 } 00:14:21.680 ]' 00:14:21.680 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:21.680 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:21.680 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:21.680 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:21.680 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:21.938 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.938 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.938 13:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:14:22.196 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkYTAwYWZjYzcwMGYwMzQxNTJkODZlNjE0YmNiNWUzYWZkYWVlZmQ0M2IyM2Y1ZTYwMmQ0YzU4NmExN2ZkMrPSnvI=: 00:14:23.133 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.133 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:23.133 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.133 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.133 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.133 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:23.133 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:23.133 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:23.133 13:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:23.391 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:14:23.391 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:23.391 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:23.391 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:23.391 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:23.391 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.391 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.391 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.391 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.391 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.391 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.391 13:43:20 
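Besides the SPDK host app, every cycle also authenticates with the Linux kernel initiator via nvme-cli, as in the connect/disconnect pairs above. The secrets are passed in the NVMe interchange format, where the DHHC-1:NN: prefix records how the secret was transformed (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). A trimmed sketch, with the secrets shortened here (the full values appear in the trace):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret 'DHHC-1:00:ZmI5YWMwZTdiYTUy...' \
        --dhchap-ctrl-secret 'DHHC-1:03:ZjRlODQ3NjQwN2Zi...'
    # Disconnect prints the 'NQN:... disconnected 1 controller(s)' lines seen above.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0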
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.650 00:14:23.650 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:23.650 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:23.650 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.908 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.908 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.908 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.908 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.908 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.908 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:23.908 { 00:14:23.908 "cntlid": 65, 00:14:23.908 "qid": 0, 00:14:23.908 "state": "enabled", 00:14:23.908 "thread": "nvmf_tgt_poll_group_000", 00:14:23.908 "listen_address": { 00:14:23.908 "trtype": "TCP", 00:14:23.908 "adrfam": "IPv4", 00:14:23.908 "traddr": "10.0.0.2", 00:14:23.908 "trsvcid": "4420" 00:14:23.908 }, 00:14:23.908 "peer_address": { 00:14:23.908 "trtype": "TCP", 00:14:23.908 "adrfam": "IPv4", 00:14:23.908 "traddr": "10.0.0.1", 00:14:23.908 "trsvcid": "48026" 00:14:23.908 }, 00:14:23.908 "auth": { 00:14:23.908 "state": "completed", 00:14:23.908 "digest": "sha384", 00:14:23.908 "dhgroup": "ffdhe3072" 00:14:23.908 } 00:14:23.908 } 00:14:23.908 ]' 00:14:23.908 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:23.908 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:23.908 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:23.908 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:23.908 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:23.908 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.908 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.908 13:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.167 13:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmI5YWMwZTdiYTUyMmQwOGI4MmE4YTYxODRmYWFjZDM3NWI2YTI4NTkyOTIyOGE0Mc2IAg==: --dhchap-ctrl-secret DHHC-1:03:ZjRlODQ3NjQwN2ZiMzU0OTM3NmM2YmYwYjNiMGFjMmYyNWI1MTgxYmNjZmNkZDJiNDVhM2M2YzUyYjRkM2YxYgA4Ht0=: 00:14:25.101 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.101 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:25.101 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.101 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.101 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.101 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:25.101 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:25.101 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:25.359 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:14:25.359 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:25.359 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:25.359 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:25.359 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:25.359 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.359 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.359 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.359 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.359 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.359 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.359 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.927 00:14:25.927 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:25.927 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:25.927 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.927 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.185 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.185 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.185 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.185 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.185 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:26.185 { 00:14:26.185 "cntlid": 67, 00:14:26.185 "qid": 0, 00:14:26.185 "state": "enabled", 00:14:26.185 "thread": "nvmf_tgt_poll_group_000", 00:14:26.185 "listen_address": { 00:14:26.185 "trtype": "TCP", 00:14:26.185 "adrfam": "IPv4", 00:14:26.185 "traddr": "10.0.0.2", 00:14:26.185 "trsvcid": "4420" 00:14:26.185 }, 00:14:26.185 "peer_address": { 00:14:26.185 "trtype": "TCP", 00:14:26.185 "adrfam": "IPv4", 00:14:26.185 "traddr": "10.0.0.1", 00:14:26.185 "trsvcid": "48048" 00:14:26.185 }, 00:14:26.185 "auth": { 00:14:26.185 "state": "completed", 00:14:26.185 "digest": "sha384", 00:14:26.185 "dhgroup": "ffdhe3072" 00:14:26.185 } 00:14:26.185 } 00:14:26.185 ]' 00:14:26.185 13:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.185 13:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:26.185 13:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.185 13:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:26.185 13:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:26.185 13:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.185 13:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.185 13:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.444 13:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjJkZDYxMmQ2MDRiNjg5NjJlMmI4YTY4OGYwMDY3NWPkJHk/: --dhchap-ctrl-secret DHHC-1:02:YzYyZmEwNGNlYzYwN2M4MTQzZWJkNjQwNDJmMjQ1MDc4NGNkNzg1NWUyZDVkNjQzuOGhjA==: 00:14:27.378 13:43:24 
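The repetition in this section is the nested driver loop visible in the trace markers target/auth.sh@92-96: an outer iteration over DH groups and an inner one over key indices, with the digest (sha384 throughout this excerpt) fixed by a loop outside this window. Roughly:

    for dhgroup in "${dhgroups[@]}"; do      # null ffdhe2048 ffdhe3072 ffdhe4096 ...
        for keyid in "${!keys[@]}"; do       # 0 1 2 3
            hostrpc bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done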
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.378 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:27.378 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.378 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.378 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.378 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:27.378 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:27.378 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:27.635 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:14:27.635 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:27.635 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:27.635 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:27.635 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:27.635 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.635 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.635 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.635 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.635 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.635 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.635 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.201 00:14:28.201 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.201 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.201 13:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.459 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.459 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.459 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.459 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.459 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.459 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:28.459 { 00:14:28.459 "cntlid": 69, 00:14:28.459 "qid": 0, 00:14:28.459 "state": "enabled", 00:14:28.459 "thread": "nvmf_tgt_poll_group_000", 00:14:28.459 "listen_address": { 00:14:28.459 "trtype": "TCP", 00:14:28.459 "adrfam": "IPv4", 00:14:28.459 "traddr": "10.0.0.2", 00:14:28.459 "trsvcid": "4420" 00:14:28.459 }, 00:14:28.459 "peer_address": { 00:14:28.459 "trtype": "TCP", 00:14:28.459 "adrfam": "IPv4", 00:14:28.459 "traddr": "10.0.0.1", 00:14:28.459 "trsvcid": "48072" 00:14:28.459 }, 00:14:28.459 "auth": { 00:14:28.459 "state": "completed", 00:14:28.459 "digest": "sha384", 00:14:28.459 "dhgroup": "ffdhe3072" 00:14:28.459 } 00:14:28.459 } 00:14:28.459 ]' 00:14:28.459 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:28.459 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:28.459 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:28.459 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:28.459 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:28.459 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.459 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.459 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.717 13:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjkxNWIxYmZlZDgwYWRhZjkxOWUxMmYxMzc5ODg3OWMyMTZkNjc0ZmI3M2YxODIx/I0Q8Q==: --dhchap-ctrl-secret DHHC-1:01:YjE4YTQwZmE2NGMwYWMwNWNkZDNhNjg0YjhkYTExZTEaz+PP: 00:14:29.654 13:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.654 13:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.654 13:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.654 13:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.654 13:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.654 13:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:29.654 13:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:29.654 13:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:29.912 13:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:14:29.912 13:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:29.912 13:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:29.912 13:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:29.912 13:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:29.912 13:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.912 13:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:29.912 13:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.912 13:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.912 13:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.912 13:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:29.912 13:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:30.170 00:14:30.170 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.170 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.170 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.428 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.428 13:43:27 
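Every hostrpc call above expands (target/auth.sh@31) to the same rpc.py invocation against /var/tmp/host.sock, which is how the test drives two SPDK processes at once: the NVMe-oF target on the default RPC socket, and a host-side bdev_nvme initiator on its own socket. The wrapper is, in effect:

    hostrpc() {
        # $rootdir is the SPDK checkout; host.sock belongs to the initiator app.
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }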
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.428 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.428 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.428 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.428 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.428 { 00:14:30.428 "cntlid": 71, 00:14:30.428 "qid": 0, 00:14:30.428 "state": "enabled", 00:14:30.428 "thread": "nvmf_tgt_poll_group_000", 00:14:30.428 "listen_address": { 00:14:30.428 "trtype": "TCP", 00:14:30.428 "adrfam": "IPv4", 00:14:30.428 "traddr": "10.0.0.2", 00:14:30.428 "trsvcid": "4420" 00:14:30.428 }, 00:14:30.428 "peer_address": { 00:14:30.428 "trtype": "TCP", 00:14:30.428 "adrfam": "IPv4", 00:14:30.428 "traddr": "10.0.0.1", 00:14:30.428 "trsvcid": "48090" 00:14:30.428 }, 00:14:30.428 "auth": { 00:14:30.428 "state": "completed", 00:14:30.428 "digest": "sha384", 00:14:30.428 "dhgroup": "ffdhe3072" 00:14:30.428 } 00:14:30.428 } 00:14:30.428 ]' 00:14:30.428 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.428 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:30.428 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.428 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:30.428 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:30.686 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.687 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.687 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.687 13:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkYTAwYWZjYzcwMGYwMzQxNTJkODZlNjE0YmNiNWUzYWZkYWVlZmQ0M2IyM2Y1ZTYwMmQ0YzU4NmExN2ZkMrPSnvI=: 00:14:31.623 13:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.623 13:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.623 13:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.623 13:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.623 13:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.623 13:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:31.623 13:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.623 13:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:31.623 13:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:32.194 13:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:14:32.194 13:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:32.194 13:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:32.194 13:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:32.194 13:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:32.194 13:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.194 13:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.194 13:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.194 13:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.194 13:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.194 13:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.194 13:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.453 00:14:32.453 13:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.453 13:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:32.453 13:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.711 13:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.711 13:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.711 13:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.711 13:43:29 
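The recurring xtrace_disable / set +x / [[ 0 == 0 ]] triplets come from SPDK's rpc_cmd helper in common/autotest_common.sh, which mutes xtrace around the chatty python client and then asserts on the saved exit status; that assertion is what prints as [[ 0 == 0 ]] on success. A rough sketch of the pattern only, since the real helper is more involved (it can also multiplex over a persistent RPC connection):

    rpc_cmd() {
        xtrace_disable                     # silence tracing for the RPC call
        "$rootdir/scripts/rpc.py" "$@"
        local rc=$?                        # capture status before restoring
        xtrace_restore
        [[ $rc == 0 ]]                     # traced as '[[ 0 == 0 ]]' on success
    }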
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.711 13:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.711 13:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.711 { 00:14:32.711 "cntlid": 73, 00:14:32.711 "qid": 0, 00:14:32.711 "state": "enabled", 00:14:32.711 "thread": "nvmf_tgt_poll_group_000", 00:14:32.711 "listen_address": { 00:14:32.711 "trtype": "TCP", 00:14:32.711 "adrfam": "IPv4", 00:14:32.711 "traddr": "10.0.0.2", 00:14:32.711 "trsvcid": "4420" 00:14:32.711 }, 00:14:32.711 "peer_address": { 00:14:32.711 "trtype": "TCP", 00:14:32.711 "adrfam": "IPv4", 00:14:32.711 "traddr": "10.0.0.1", 00:14:32.711 "trsvcid": "48118" 00:14:32.711 }, 00:14:32.711 "auth": { 00:14:32.711 "state": "completed", 00:14:32.711 "digest": "sha384", 00:14:32.711 "dhgroup": "ffdhe4096" 00:14:32.711 } 00:14:32.711 } 00:14:32.711 ]' 00:14:32.711 13:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:32.711 13:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:32.711 13:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:32.711 13:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:32.711 13:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:32.711 13:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.711 13:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.711 13:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.970 13:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmI5YWMwZTdiYTUyMmQwOGI4MmE4YTYxODRmYWFjZDM3NWI2YTI4NTkyOTIyOGE0Mc2IAg==: --dhchap-ctrl-secret DHHC-1:03:ZjRlODQ3NjQwN2ZiMzU0OTM3NmM2YmYwYjNiMGFjMmYyNWI1MTgxYmNjZmNkZDJiNDVhM2M2YzUyYjRkM2YxYgA4Ht0=: 00:14:33.905 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.905 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:33.905 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.905 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.905 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.905 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:33.905 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:33.905 13:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:34.164 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:14:34.164 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:34.164 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:34.164 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:34.164 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:34.164 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.164 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.164 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.164 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.164 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.164 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.164 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.729 00:14:34.729 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:34.729 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.729 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.729 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.729 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.729 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.729 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.729 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.729 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:14:34.729 { 00:14:34.729 "cntlid": 75, 00:14:34.729 "qid": 0, 00:14:34.729 "state": "enabled", 00:14:34.729 "thread": "nvmf_tgt_poll_group_000", 00:14:34.729 "listen_address": { 00:14:34.729 "trtype": "TCP", 00:14:34.729 "adrfam": "IPv4", 00:14:34.729 "traddr": "10.0.0.2", 00:14:34.729 "trsvcid": "4420" 00:14:34.729 }, 00:14:34.729 "peer_address": { 00:14:34.729 "trtype": "TCP", 00:14:34.729 "adrfam": "IPv4", 00:14:34.729 "traddr": "10.0.0.1", 00:14:34.729 "trsvcid": "46458" 00:14:34.729 }, 00:14:34.729 "auth": { 00:14:34.729 "state": "completed", 00:14:34.729 "digest": "sha384", 00:14:34.729 "dhgroup": "ffdhe4096" 00:14:34.729 } 00:14:34.729 } 00:14:34.729 ]' 00:14:34.729 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:34.986 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:34.986 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:34.986 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:34.986 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:34.986 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.986 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.986 13:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.245 13:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjJkZDYxMmQ2MDRiNjg5NjJlMmI4YTY4OGYwMDY3NWPkJHk/: --dhchap-ctrl-secret DHHC-1:02:YzYyZmEwNGNlYzYwN2M4MTQzZWJkNjQwNDJmMjQ1MDc4NGNkNzg1NWUyZDVkNjQzuOGhjA==: 00:14:36.179 13:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.179 13:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:36.179 13:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.179 13:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.179 13:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.179 13:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:36.179 13:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:36.179 13:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:36.437 
13:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:14:36.437 13:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.437 13:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:36.437 13:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:36.437 13:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:36.437 13:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.437 13:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.437 13:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.437 13:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.437 13:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.437 13:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.437 13:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:37.002 00:14:37.002 13:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:37.002 13:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:37.002 13:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.002 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.002 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.002 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.002 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.002 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.002 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:37.002 { 00:14:37.002 "cntlid": 77, 00:14:37.002 "qid": 0, 00:14:37.002 "state": "enabled", 00:14:37.002 "thread": "nvmf_tgt_poll_group_000", 00:14:37.002 "listen_address": { 00:14:37.002 "trtype": "TCP", 00:14:37.002 "adrfam": "IPv4", 00:14:37.002 "traddr": "10.0.0.2", 00:14:37.002 "trsvcid": "4420" 00:14:37.002 }, 00:14:37.002 "peer_address": { 
00:14:37.002 "trtype": "TCP", 00:14:37.002 "adrfam": "IPv4", 00:14:37.002 "traddr": "10.0.0.1", 00:14:37.002 "trsvcid": "46482" 00:14:37.002 }, 00:14:37.002 "auth": { 00:14:37.002 "state": "completed", 00:14:37.002 "digest": "sha384", 00:14:37.002 "dhgroup": "ffdhe4096" 00:14:37.002 } 00:14:37.002 } 00:14:37.002 ]' 00:14:37.002 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.260 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:37.260 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.260 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:37.260 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:37.260 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.260 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.260 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.517 13:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjkxNWIxYmZlZDgwYWRhZjkxOWUxMmYxMzc5ODg3OWMyMTZkNjc0ZmI3M2YxODIx/I0Q8Q==: --dhchap-ctrl-secret DHHC-1:01:YjE4YTQwZmE2NGMwYWMwNWNkZDNhNjg0YjhkYTExZTEaz+PP: 00:14:38.451 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.451 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:38.451 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.451 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.451 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.451 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.451 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:38.451 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:38.708 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:14:38.708 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:38.708 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:14:38.708 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:38.708 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:38.708 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.708 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:38.708 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.708 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.708 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.708 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:38.708 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:38.972 00:14:38.972 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:38.972 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.972 13:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:39.279 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.279 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.279 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.279 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.279 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.279 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:39.279 { 00:14:39.279 "cntlid": 79, 00:14:39.279 "qid": 0, 00:14:39.279 "state": "enabled", 00:14:39.279 "thread": "nvmf_tgt_poll_group_000", 00:14:39.279 "listen_address": { 00:14:39.279 "trtype": "TCP", 00:14:39.279 "adrfam": "IPv4", 00:14:39.279 "traddr": "10.0.0.2", 00:14:39.279 "trsvcid": "4420" 00:14:39.279 }, 00:14:39.279 "peer_address": { 00:14:39.279 "trtype": "TCP", 00:14:39.279 "adrfam": "IPv4", 00:14:39.279 "traddr": "10.0.0.1", 00:14:39.279 "trsvcid": "46514" 00:14:39.279 }, 00:14:39.279 "auth": { 00:14:39.279 "state": "completed", 00:14:39.279 "digest": "sha384", 00:14:39.279 "dhgroup": "ffdhe4096" 00:14:39.279 } 00:14:39.279 } 00:14:39.279 ]' 00:14:39.279 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:14:39.279 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:39.279 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:39.279 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:39.279 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:39.279 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.279 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.279 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.539 13:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkYTAwYWZjYzcwMGYwMzQxNTJkODZlNjE0YmNiNWUzYWZkYWVlZmQ0M2IyM2Y1ZTYwMmQ0YzU4NmExN2ZkMrPSnvI=: 00:14:40.473 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.473 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:40.473 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.473 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.473 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.473 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:40.473 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:40.473 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:40.473 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:40.731 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:14:40.731 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:40.731 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:40.731 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:40.731 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:40.731 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
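Here the loop has advanced to the ffdhe6144 group and is restarting at key0. The @52-@56 lines that recur between these RPC-driven rounds push the same key pair through the kernel initiator with nvme-cli; a sketch of that leg follows, with <key0>/<ckey0> abbreviating the DHHC-1 secrets printed verbatim in the log (in those DHHC-1:<t>:<blob>: strings, <t> names the optional hash used to transform the secret and <blob> is the base64-encoded key material, per the NVMe DH-HMAC-CHAP secret representation):

# Kernel-initiator leg of each iteration, @52-@56 in the trace
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
	-q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
	--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
	--dhchap-secret '<key0>' --dhchap-ctrl-secret '<ckey0>'
# @55: success shows up as "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# @56: revoke the host entry on the target so the next keyid starts clean
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
	nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55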
00:14:40.731 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.731 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.731 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.731 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.731 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.731 13:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.296 00:14:41.296 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:41.296 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:41.296 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.554 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.554 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.554 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.554 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.554 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.554 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:41.554 { 00:14:41.554 "cntlid": 81, 00:14:41.554 "qid": 0, 00:14:41.554 "state": "enabled", 00:14:41.554 "thread": "nvmf_tgt_poll_group_000", 00:14:41.554 "listen_address": { 00:14:41.554 "trtype": "TCP", 00:14:41.554 "adrfam": "IPv4", 00:14:41.554 "traddr": "10.0.0.2", 00:14:41.554 "trsvcid": "4420" 00:14:41.554 }, 00:14:41.554 "peer_address": { 00:14:41.554 "trtype": "TCP", 00:14:41.554 "adrfam": "IPv4", 00:14:41.554 "traddr": "10.0.0.1", 00:14:41.554 "trsvcid": "46530" 00:14:41.554 }, 00:14:41.554 "auth": { 00:14:41.554 "state": "completed", 00:14:41.554 "digest": "sha384", 00:14:41.554 "dhgroup": "ffdhe6144" 00:14:41.554 } 00:14:41.554 } 00:14:41.554 ]' 00:14:41.554 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:41.554 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:41.554 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:41.554 13:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:41.554 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:41.554 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.554 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.554 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.812 13:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmI5YWMwZTdiYTUyMmQwOGI4MmE4YTYxODRmYWFjZDM3NWI2YTI4NTkyOTIyOGE0Mc2IAg==: --dhchap-ctrl-secret DHHC-1:03:ZjRlODQ3NjQwN2ZiMzU0OTM3NmM2YmYwYjNiMGFjMmYyNWI1MTgxYmNjZmNkZDJiNDVhM2M2YzUyYjRkM2YxYgA4Ht0=: 00:14:42.746 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.746 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:42.746 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.746 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.746 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.746 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:42.746 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:42.746 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:43.004 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:14:43.004 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.004 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:43.004 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:43.004 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:43.004 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.004 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.005 13:43:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.005 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.005 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.005 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.005 13:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.570 00:14:43.570 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:43.570 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.570 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:43.828 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.828 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.828 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.828 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.828 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.828 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:43.828 { 00:14:43.828 "cntlid": 83, 00:14:43.828 "qid": 0, 00:14:43.828 "state": "enabled", 00:14:43.828 "thread": "nvmf_tgt_poll_group_000", 00:14:43.828 "listen_address": { 00:14:43.828 "trtype": "TCP", 00:14:43.828 "adrfam": "IPv4", 00:14:43.828 "traddr": "10.0.0.2", 00:14:43.828 "trsvcid": "4420" 00:14:43.828 }, 00:14:43.828 "peer_address": { 00:14:43.828 "trtype": "TCP", 00:14:43.828 "adrfam": "IPv4", 00:14:43.828 "traddr": "10.0.0.1", 00:14:43.828 "trsvcid": "34352" 00:14:43.828 }, 00:14:43.828 "auth": { 00:14:43.828 "state": "completed", 00:14:43.828 "digest": "sha384", 00:14:43.828 "dhgroup": "ffdhe6144" 00:14:43.828 } 00:14:43.828 } 00:14:43.828 ]' 00:14:43.828 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:43.828 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:43.828 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:43.828 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:43.828 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:44.086 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.086 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.086 13:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.086 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjJkZDYxMmQ2MDRiNjg5NjJlMmI4YTY4OGYwMDY3NWPkJHk/: --dhchap-ctrl-secret DHHC-1:02:YzYyZmEwNGNlYzYwN2M4MTQzZWJkNjQwNDJmMjQ1MDc4NGNkNzg1NWUyZDVkNjQzuOGhjA==: 00:14:45.019 13:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.019 13:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:45.019 13:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.019 13:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.019 13:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.019 13:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:45.019 13:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:45.019 13:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:45.278 13:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:14:45.278 13:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:45.278 13:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:45.278 13:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:45.278 13:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:45.278 13:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.278 13:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.278 13:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.278 13:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.278 13:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.278 13:43:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.279 13:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.844 00:14:45.844 13:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:45.844 13:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:45.844 13:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.102 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.102 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.102 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.102 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.102 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.102 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:46.102 { 00:14:46.102 "cntlid": 85, 00:14:46.102 "qid": 0, 00:14:46.102 "state": "enabled", 00:14:46.102 "thread": "nvmf_tgt_poll_group_000", 00:14:46.102 "listen_address": { 00:14:46.102 "trtype": "TCP", 00:14:46.102 "adrfam": "IPv4", 00:14:46.102 "traddr": "10.0.0.2", 00:14:46.102 "trsvcid": "4420" 00:14:46.102 }, 00:14:46.102 "peer_address": { 00:14:46.102 "trtype": "TCP", 00:14:46.102 "adrfam": "IPv4", 00:14:46.102 "traddr": "10.0.0.1", 00:14:46.102 "trsvcid": "34384" 00:14:46.102 }, 00:14:46.102 "auth": { 00:14:46.102 "state": "completed", 00:14:46.102 "digest": "sha384", 00:14:46.102 "dhgroup": "ffdhe6144" 00:14:46.102 } 00:14:46.102 } 00:14:46.102 ]' 00:14:46.102 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:46.102 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:46.102 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:46.102 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:46.102 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:46.360 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.360 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.360 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.616 13:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjkxNWIxYmZlZDgwYWRhZjkxOWUxMmYxMzc5ODg3OWMyMTZkNjc0ZmI3M2YxODIx/I0Q8Q==: --dhchap-ctrl-secret DHHC-1:01:YjE4YTQwZmE2NGMwYWMwNWNkZDNhNjg0YjhkYTExZTEaz+PP: 00:14:47.549 13:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.549 13:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:47.549 13:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.549 13:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.549 13:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.549 13:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:47.549 13:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:47.549 13:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:47.549 13:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:14:47.549 13:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:47.549 13:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:47.549 13:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:47.549 13:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:47.549 13:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.549 13:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:47.549 13:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.549 13:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.549 13:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.549 13:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:47.549 13:43:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:48.114 00:14:48.114 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:48.115 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.115 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:48.372 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.372 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.372 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.372 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.372 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.372 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:48.372 { 00:14:48.372 "cntlid": 87, 00:14:48.372 "qid": 0, 00:14:48.372 "state": "enabled", 00:14:48.372 "thread": "nvmf_tgt_poll_group_000", 00:14:48.372 "listen_address": { 00:14:48.372 "trtype": "TCP", 00:14:48.372 "adrfam": "IPv4", 00:14:48.373 "traddr": "10.0.0.2", 00:14:48.373 "trsvcid": "4420" 00:14:48.373 }, 00:14:48.373 "peer_address": { 00:14:48.373 "trtype": "TCP", 00:14:48.373 "adrfam": "IPv4", 00:14:48.373 "traddr": "10.0.0.1", 00:14:48.373 "trsvcid": "34408" 00:14:48.373 }, 00:14:48.373 "auth": { 00:14:48.373 "state": "completed", 00:14:48.373 "digest": "sha384", 00:14:48.373 "dhgroup": "ffdhe6144" 00:14:48.373 } 00:14:48.373 } 00:14:48.373 ]' 00:14:48.373 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:48.373 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:48.373 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:48.373 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:48.373 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:48.677 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.677 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.677 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.677 13:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-secret DHHC-1:03:YThkYTAwYWZjYzcwMGYwMzQxNTJkODZlNjE0YmNiNWUzYWZkYWVlZmQ0M2IyM2Y1ZTYwMmQ0YzU4NmExN2ZkMrPSnvI=: 00:14:49.609 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.609 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:49.609 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.609 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.609 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.609 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:49.609 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.609 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:49.609 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:49.867 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:14:49.867 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.867 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:49.867 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:49.867 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:49.867 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.867 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.867 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.867 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.867 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.867 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.867 13:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.801 00:14:50.801 13:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:50.801 13:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:50.801 13:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.059 13:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.059 13:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.059 13:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.059 13:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.059 13:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.059 13:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:51.059 { 00:14:51.059 "cntlid": 89, 00:14:51.059 "qid": 0, 00:14:51.059 "state": "enabled", 00:14:51.059 "thread": "nvmf_tgt_poll_group_000", 00:14:51.059 "listen_address": { 00:14:51.059 "trtype": "TCP", 00:14:51.059 "adrfam": "IPv4", 00:14:51.059 "traddr": "10.0.0.2", 00:14:51.059 "trsvcid": "4420" 00:14:51.059 }, 00:14:51.059 "peer_address": { 00:14:51.059 "trtype": "TCP", 00:14:51.059 "adrfam": "IPv4", 00:14:51.059 "traddr": "10.0.0.1", 00:14:51.059 "trsvcid": "34434" 00:14:51.059 }, 00:14:51.059 "auth": { 00:14:51.059 "state": "completed", 00:14:51.059 "digest": "sha384", 00:14:51.059 "dhgroup": "ffdhe8192" 00:14:51.059 } 00:14:51.059 } 00:14:51.059 ]' 00:14:51.059 13:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:51.059 13:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:51.059 13:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.059 13:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:51.059 13:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:51.059 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.059 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.059 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.317 13:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmI5YWMwZTdiYTUyMmQwOGI4MmE4YTYxODRmYWFjZDM3NWI2YTI4NTkyOTIyOGE0Mc2IAg==: --dhchap-ctrl-secret DHHC-1:03:ZjRlODQ3NjQwN2ZiMzU0OTM3NmM2YmYwYjNiMGFjMmYyNWI1MTgxYmNjZmNkZDJiNDVhM2M2YzUyYjRkM2YxYgA4Ht0=: 00:14:52.249 13:43:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.249 13:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:52.249 13:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.249 13:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.249 13:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.249 13:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:52.249 13:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:52.249 13:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:52.505 13:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:14:52.505 13:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.505 13:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:52.505 13:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:52.505 13:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:52.505 13:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.505 13:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.505 13:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.505 13:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.505 13:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.505 13:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.505 13:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.435 00:14:53.435 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.435 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:14:53.435 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.693 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.693 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.693 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.693 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.693 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.693 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:53.693 { 00:14:53.693 "cntlid": 91, 00:14:53.693 "qid": 0, 00:14:53.693 "state": "enabled", 00:14:53.693 "thread": "nvmf_tgt_poll_group_000", 00:14:53.693 "listen_address": { 00:14:53.693 "trtype": "TCP", 00:14:53.693 "adrfam": "IPv4", 00:14:53.693 "traddr": "10.0.0.2", 00:14:53.693 "trsvcid": "4420" 00:14:53.693 }, 00:14:53.693 "peer_address": { 00:14:53.693 "trtype": "TCP", 00:14:53.693 "adrfam": "IPv4", 00:14:53.693 "traddr": "10.0.0.1", 00:14:53.693 "trsvcid": "49874" 00:14:53.693 }, 00:14:53.693 "auth": { 00:14:53.693 "state": "completed", 00:14:53.693 "digest": "sha384", 00:14:53.693 "dhgroup": "ffdhe8192" 00:14:53.693 } 00:14:53.693 } 00:14:53.693 ]' 00:14:53.693 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:53.693 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:53.693 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:53.693 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:53.693 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:53.693 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.693 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.693 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.950 13:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjJkZDYxMmQ2MDRiNjg5NjJlMmI4YTY4OGYwMDY3NWPkJHk/: --dhchap-ctrl-secret DHHC-1:02:YzYyZmEwNGNlYzYwN2M4MTQzZWJkNjQwNDJmMjQ1MDc4NGNkNzg1NWUyZDVkNjQzuOGhjA==: 00:14:54.882 13:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.882 13:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:54.882 13:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.882 13:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.882 13:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.882 13:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.882 13:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:54.882 13:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:55.139 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:14:55.139 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.139 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:55.139 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:55.139 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:55.139 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.139 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.139 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.139 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.139 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.139 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.139 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.071 00:14:56.071 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:56.071 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:56.071 13:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.328 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:14:56.328 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.328 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.328 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.328 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.328 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:56.328 { 00:14:56.328 "cntlid": 93, 00:14:56.328 "qid": 0, 00:14:56.328 "state": "enabled", 00:14:56.328 "thread": "nvmf_tgt_poll_group_000", 00:14:56.328 "listen_address": { 00:14:56.328 "trtype": "TCP", 00:14:56.328 "adrfam": "IPv4", 00:14:56.328 "traddr": "10.0.0.2", 00:14:56.328 "trsvcid": "4420" 00:14:56.328 }, 00:14:56.328 "peer_address": { 00:14:56.328 "trtype": "TCP", 00:14:56.328 "adrfam": "IPv4", 00:14:56.328 "traddr": "10.0.0.1", 00:14:56.328 "trsvcid": "49918" 00:14:56.328 }, 00:14:56.328 "auth": { 00:14:56.328 "state": "completed", 00:14:56.328 "digest": "sha384", 00:14:56.328 "dhgroup": "ffdhe8192" 00:14:56.328 } 00:14:56.328 } 00:14:56.328 ]' 00:14:56.328 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.328 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:56.328 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.328 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:56.328 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.328 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.328 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.328 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.585 13:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjkxNWIxYmZlZDgwYWRhZjkxOWUxMmYxMzc5ODg3OWMyMTZkNjc0ZmI3M2YxODIx/I0Q8Q==: --dhchap-ctrl-secret DHHC-1:01:YjE4YTQwZmE2NGMwYWMwNWNkZDNhNjg0YjhkYTExZTEaz+PP: 00:14:57.516 13:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.516 13:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:57.516 13:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.516 13:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.516 13:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.516 13:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.516 13:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:57.516 13:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:57.774 13:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:14:57.774 13:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:57.774 13:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:57.774 13:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:57.774 13:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:57.774 13:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.774 13:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:57.774 13:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.774 13:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.774 13:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.774 13:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:57.774 13:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:58.706 00:14:58.706 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:58.706 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:58.706 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.964 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.964 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.964 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.964 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:14:58.964 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.964 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.964 { 00:14:58.964 "cntlid": 95, 00:14:58.964 "qid": 0, 00:14:58.964 "state": "enabled", 00:14:58.964 "thread": "nvmf_tgt_poll_group_000", 00:14:58.964 "listen_address": { 00:14:58.964 "trtype": "TCP", 00:14:58.964 "adrfam": "IPv4", 00:14:58.964 "traddr": "10.0.0.2", 00:14:58.964 "trsvcid": "4420" 00:14:58.964 }, 00:14:58.964 "peer_address": { 00:14:58.964 "trtype": "TCP", 00:14:58.964 "adrfam": "IPv4", 00:14:58.964 "traddr": "10.0.0.1", 00:14:58.964 "trsvcid": "49950" 00:14:58.964 }, 00:14:58.964 "auth": { 00:14:58.964 "state": "completed", 00:14:58.964 "digest": "sha384", 00:14:58.964 "dhgroup": "ffdhe8192" 00:14:58.964 } 00:14:58.964 } 00:14:58.964 ]' 00:14:58.964 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.964 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:58.964 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.964 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:58.964 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.964 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.964 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.964 13:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.222 13:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkYTAwYWZjYzcwMGYwMzQxNTJkODZlNjE0YmNiNWUzYWZkYWVlZmQ0M2IyM2Y1ZTYwMmQ0YzU4NmExN2ZkMrPSnvI=: 00:15:00.156 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.156 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:00.156 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.156 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.156 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.156 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:00.156 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:00.156 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:00.156 13:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:00.156 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:00.413 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:15:00.413 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:00.413 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:00.413 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:00.413 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:00.413 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.413 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.413 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.413 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.413 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.413 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.414 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.980 00:15:00.980 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:00.980 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:00.980 13:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.268 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.268 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.268 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.268 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.268 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.268 13:43:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:01.268 { 00:15:01.268 "cntlid": 97, 00:15:01.268 "qid": 0, 00:15:01.268 "state": "enabled", 00:15:01.268 "thread": "nvmf_tgt_poll_group_000", 00:15:01.268 "listen_address": { 00:15:01.268 "trtype": "TCP", 00:15:01.268 "adrfam": "IPv4", 00:15:01.268 "traddr": "10.0.0.2", 00:15:01.268 "trsvcid": "4420" 00:15:01.268 }, 00:15:01.268 "peer_address": { 00:15:01.268 "trtype": "TCP", 00:15:01.268 "adrfam": "IPv4", 00:15:01.268 "traddr": "10.0.0.1", 00:15:01.268 "trsvcid": "49988" 00:15:01.268 }, 00:15:01.268 "auth": { 00:15:01.268 "state": "completed", 00:15:01.268 "digest": "sha512", 00:15:01.268 "dhgroup": "null" 00:15:01.268 } 00:15:01.268 } 00:15:01.268 ]' 00:15:01.268 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:01.268 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:01.268 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:01.268 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:01.268 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:01.268 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.268 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.268 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.528 13:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmI5YWMwZTdiYTUyMmQwOGI4MmE4YTYxODRmYWFjZDM3NWI2YTI4NTkyOTIyOGE0Mc2IAg==: --dhchap-ctrl-secret DHHC-1:03:ZjRlODQ3NjQwN2ZiMzU0OTM3NmM2YmYwYjNiMGFjMmYyNWI1MTgxYmNjZmNkZDJiNDVhM2M2YzUyYjRkM2YxYgA4Ht0=: 00:15:02.460 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.460 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:02.460 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.460 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.460 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.460 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:02.460 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:02.460 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:02.718 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:15:02.718 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:02.718 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:02.718 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:02.718 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:02.718 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.718 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.718 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.718 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.718 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.718 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.718 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.976 00:15:02.976 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:02.976 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:02.976 13:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.234 13:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.234 13:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.234 13:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.234 13:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.234 13:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.234 13:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:03.234 { 00:15:03.234 "cntlid": 99, 00:15:03.234 "qid": 0, 00:15:03.234 "state": "enabled", 00:15:03.234 "thread": "nvmf_tgt_poll_group_000", 00:15:03.234 "listen_address": { 00:15:03.234 "trtype": "TCP", 00:15:03.234 "adrfam": "IPv4", 00:15:03.234 
"traddr": "10.0.0.2", 00:15:03.234 "trsvcid": "4420" 00:15:03.234 }, 00:15:03.234 "peer_address": { 00:15:03.234 "trtype": "TCP", 00:15:03.234 "adrfam": "IPv4", 00:15:03.234 "traddr": "10.0.0.1", 00:15:03.234 "trsvcid": "60328" 00:15:03.234 }, 00:15:03.234 "auth": { 00:15:03.234 "state": "completed", 00:15:03.234 "digest": "sha512", 00:15:03.234 "dhgroup": "null" 00:15:03.234 } 00:15:03.234 } 00:15:03.234 ]' 00:15:03.234 13:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:03.234 13:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:03.234 13:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:03.234 13:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:03.234 13:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:03.234 13:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.234 13:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.234 13:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.492 13:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjJkZDYxMmQ2MDRiNjg5NjJlMmI4YTY4OGYwMDY3NWPkJHk/: --dhchap-ctrl-secret DHHC-1:02:YzYyZmEwNGNlYzYwN2M4MTQzZWJkNjQwNDJmMjQ1MDc4NGNkNzg1NWUyZDVkNjQzuOGhjA==: 00:15:04.425 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.425 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:04.425 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.425 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.425 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.425 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.425 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:04.425 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:04.682 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:15:04.682 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.682 13:44:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:04.682 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:04.682 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:04.682 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.682 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.682 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.682 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.682 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.682 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.682 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.940 00:15:04.940 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:04.940 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:04.940 13:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.197 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.197 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.197 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.197 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.197 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.197 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:05.197 { 00:15:05.197 "cntlid": 101, 00:15:05.197 "qid": 0, 00:15:05.197 "state": "enabled", 00:15:05.197 "thread": "nvmf_tgt_poll_group_000", 00:15:05.197 "listen_address": { 00:15:05.197 "trtype": "TCP", 00:15:05.197 "adrfam": "IPv4", 00:15:05.197 "traddr": "10.0.0.2", 00:15:05.197 "trsvcid": "4420" 00:15:05.197 }, 00:15:05.197 "peer_address": { 00:15:05.197 "trtype": "TCP", 00:15:05.197 "adrfam": "IPv4", 00:15:05.197 "traddr": "10.0.0.1", 00:15:05.197 "trsvcid": "60364" 00:15:05.197 }, 00:15:05.197 "auth": { 00:15:05.197 "state": "completed", 00:15:05.197 "digest": "sha512", 00:15:05.197 "dhgroup": "null" 
00:15:05.197 } 00:15:05.197 } 00:15:05.197 ]' 00:15:05.197 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:05.455 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:05.455 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:05.455 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:05.455 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:05.455 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.455 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.455 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.713 13:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjkxNWIxYmZlZDgwYWRhZjkxOWUxMmYxMzc5ODg3OWMyMTZkNjc0ZmI3M2YxODIx/I0Q8Q==: --dhchap-ctrl-secret DHHC-1:01:YjE4YTQwZmE2NGMwYWMwNWNkZDNhNjg0YjhkYTExZTEaz+PP: 00:15:06.644 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.644 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:06.644 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.644 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.644 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.644 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:06.644 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:06.644 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:06.901 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:15:06.901 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.901 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:06.901 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:06.901 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:06.901 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.901 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:06.901 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.901 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.901 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.901 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:06.901 13:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:07.159 00:15:07.159 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.159 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.159 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.417 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.417 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.417 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.417 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.417 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.418 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.418 { 00:15:07.418 "cntlid": 103, 00:15:07.418 "qid": 0, 00:15:07.418 "state": "enabled", 00:15:07.418 "thread": "nvmf_tgt_poll_group_000", 00:15:07.418 "listen_address": { 00:15:07.418 "trtype": "TCP", 00:15:07.418 "adrfam": "IPv4", 00:15:07.418 "traddr": "10.0.0.2", 00:15:07.418 "trsvcid": "4420" 00:15:07.418 }, 00:15:07.418 "peer_address": { 00:15:07.418 "trtype": "TCP", 00:15:07.418 "adrfam": "IPv4", 00:15:07.418 "traddr": "10.0.0.1", 00:15:07.418 "trsvcid": "60396" 00:15:07.418 }, 00:15:07.418 "auth": { 00:15:07.418 "state": "completed", 00:15:07.418 "digest": "sha512", 00:15:07.418 "dhgroup": "null" 00:15:07.418 } 00:15:07.418 } 00:15:07.418 ]' 00:15:07.418 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.418 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:07.418 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.675 13:44:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:07.675 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.675 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.675 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.675 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.933 13:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkYTAwYWZjYzcwMGYwMzQxNTJkODZlNjE0YmNiNWUzYWZkYWVlZmQ0M2IyM2Y1ZTYwMmQ0YzU4NmExN2ZkMrPSnvI=: 00:15:08.866 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.866 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:08.866 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.866 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.866 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.866 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:08.866 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:08.866 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:08.866 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:08.866 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:15:08.866 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:08.866 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:08.866 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:08.866 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:08.866 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.866 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.866 13:44:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.866 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.866 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.866 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.866 13:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.431 00:15:09.431 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.431 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.431 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.431 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.688 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.689 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.689 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.689 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.689 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:09.689 { 00:15:09.689 "cntlid": 105, 00:15:09.689 "qid": 0, 00:15:09.689 "state": "enabled", 00:15:09.689 "thread": "nvmf_tgt_poll_group_000", 00:15:09.689 "listen_address": { 00:15:09.689 "trtype": "TCP", 00:15:09.689 "adrfam": "IPv4", 00:15:09.689 "traddr": "10.0.0.2", 00:15:09.689 "trsvcid": "4420" 00:15:09.689 }, 00:15:09.689 "peer_address": { 00:15:09.689 "trtype": "TCP", 00:15:09.689 "adrfam": "IPv4", 00:15:09.689 "traddr": "10.0.0.1", 00:15:09.689 "trsvcid": "60418" 00:15:09.689 }, 00:15:09.689 "auth": { 00:15:09.689 "state": "completed", 00:15:09.689 "digest": "sha512", 00:15:09.689 "dhgroup": "ffdhe2048" 00:15:09.689 } 00:15:09.689 } 00:15:09.689 ]' 00:15:09.689 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:09.689 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:09.689 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:09.689 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:09.689 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:09.689 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.689 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.689 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.947 13:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmI5YWMwZTdiYTUyMmQwOGI4MmE4YTYxODRmYWFjZDM3NWI2YTI4NTkyOTIyOGE0Mc2IAg==: --dhchap-ctrl-secret DHHC-1:03:ZjRlODQ3NjQwN2ZiMzU0OTM3NmM2YmYwYjNiMGFjMmYyNWI1MTgxYmNjZmNkZDJiNDVhM2M2YzUyYjRkM2YxYgA4Ht0=: 00:15:10.881 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.881 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.881 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.881 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.881 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.881 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:10.881 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:10.881 13:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:11.139 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:15:11.139 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:11.139 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:11.139 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:11.139 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:11.139 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.139 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.139 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.139 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.139 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:11.139 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.139 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.397 00:15:11.397 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:11.397 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:11.397 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.656 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.656 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.656 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.656 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.656 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.656 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:11.656 { 00:15:11.656 "cntlid": 107, 00:15:11.656 "qid": 0, 00:15:11.656 "state": "enabled", 00:15:11.656 "thread": "nvmf_tgt_poll_group_000", 00:15:11.656 "listen_address": { 00:15:11.656 "trtype": "TCP", 00:15:11.656 "adrfam": "IPv4", 00:15:11.656 "traddr": "10.0.0.2", 00:15:11.656 "trsvcid": "4420" 00:15:11.656 }, 00:15:11.656 "peer_address": { 00:15:11.656 "trtype": "TCP", 00:15:11.656 "adrfam": "IPv4", 00:15:11.656 "traddr": "10.0.0.1", 00:15:11.656 "trsvcid": "60450" 00:15:11.656 }, 00:15:11.656 "auth": { 00:15:11.656 "state": "completed", 00:15:11.656 "digest": "sha512", 00:15:11.656 "dhgroup": "ffdhe2048" 00:15:11.656 } 00:15:11.656 } 00:15:11.656 ]' 00:15:11.656 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:11.914 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:11.914 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:11.914 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:11.914 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:11.914 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.914 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.914 13:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.172 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjJkZDYxMmQ2MDRiNjg5NjJlMmI4YTY4OGYwMDY3NWPkJHk/: --dhchap-ctrl-secret DHHC-1:02:YzYyZmEwNGNlYzYwN2M4MTQzZWJkNjQwNDJmMjQ1MDc4NGNkNzg1NWUyZDVkNjQzuOGhjA==: 00:15:13.105 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.105 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:13.105 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.105 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.105 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.105 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:13.105 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:13.105 13:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:13.364 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:15:13.364 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:13.364 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:13.364 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:13.364 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:13.364 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.364 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.364 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.364 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.364 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.364 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:13.364 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.622 00:15:13.622 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:13.622 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.622 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:13.880 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.880 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.880 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.880 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.880 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.880 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:13.880 { 00:15:13.880 "cntlid": 109, 00:15:13.880 "qid": 0, 00:15:13.880 "state": "enabled", 00:15:13.880 "thread": "nvmf_tgt_poll_group_000", 00:15:13.880 "listen_address": { 00:15:13.880 "trtype": "TCP", 00:15:13.880 "adrfam": "IPv4", 00:15:13.880 "traddr": "10.0.0.2", 00:15:13.880 "trsvcid": "4420" 00:15:13.880 }, 00:15:13.880 "peer_address": { 00:15:13.880 "trtype": "TCP", 00:15:13.880 "adrfam": "IPv4", 00:15:13.880 "traddr": "10.0.0.1", 00:15:13.880 "trsvcid": "38574" 00:15:13.880 }, 00:15:13.880 "auth": { 00:15:13.880 "state": "completed", 00:15:13.880 "digest": "sha512", 00:15:13.880 "dhgroup": "ffdhe2048" 00:15:13.880 } 00:15:13.880 } 00:15:13.880 ]' 00:15:13.880 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:13.880 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:13.880 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:13.880 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:13.880 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:14.138 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.138 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.138 13:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.138 13:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjkxNWIxYmZlZDgwYWRhZjkxOWUxMmYxMzc5ODg3OWMyMTZkNjc0ZmI3M2YxODIx/I0Q8Q==: --dhchap-ctrl-secret DHHC-1:01:YjE4YTQwZmE2NGMwYWMwNWNkZDNhNjg0YjhkYTExZTEaz+PP: 00:15:15.070 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.070 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:15.070 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.070 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.070 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.070 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.070 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:15.070 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:15.328 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:15:15.328 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:15.328 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:15.328 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:15.328 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:15.328 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.328 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:15.328 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.328 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.328 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.328 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:15.329 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:15.586 00:15:15.844 13:44:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:15.844 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:15.844 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.844 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.844 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.844 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.844 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.103 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.103 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:16.103 { 00:15:16.103 "cntlid": 111, 00:15:16.103 "qid": 0, 00:15:16.103 "state": "enabled", 00:15:16.103 "thread": "nvmf_tgt_poll_group_000", 00:15:16.103 "listen_address": { 00:15:16.103 "trtype": "TCP", 00:15:16.103 "adrfam": "IPv4", 00:15:16.103 "traddr": "10.0.0.2", 00:15:16.103 "trsvcid": "4420" 00:15:16.103 }, 00:15:16.103 "peer_address": { 00:15:16.103 "trtype": "TCP", 00:15:16.103 "adrfam": "IPv4", 00:15:16.103 "traddr": "10.0.0.1", 00:15:16.103 "trsvcid": "38590" 00:15:16.103 }, 00:15:16.103 "auth": { 00:15:16.103 "state": "completed", 00:15:16.103 "digest": "sha512", 00:15:16.103 "dhgroup": "ffdhe2048" 00:15:16.103 } 00:15:16.103 } 00:15:16.103 ]' 00:15:16.103 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:16.103 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:16.103 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:16.103 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:16.103 13:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:16.103 13:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.103 13:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.103 13:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.361 13:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkYTAwYWZjYzcwMGYwMzQxNTJkODZlNjE0YmNiNWUzYWZkYWVlZmQ0M2IyM2Y1ZTYwMmQ0YzU4NmExN2ZkMrPSnvI=: 00:15:17.294 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.294 13:44:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:17.294 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.294 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.294 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.294 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:17.294 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:17.294 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:17.294 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:17.552 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:15:17.552 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:17.552 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:17.552 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:17.552 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:17.552 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.552 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.552 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.552 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.552 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.552 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.552 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.809 00:15:17.809 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:17.809 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:17.809 13:44:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.066 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.066 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.067 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.067 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.067 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.067 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:18.067 { 00:15:18.067 "cntlid": 113, 00:15:18.067 "qid": 0, 00:15:18.067 "state": "enabled", 00:15:18.067 "thread": "nvmf_tgt_poll_group_000", 00:15:18.067 "listen_address": { 00:15:18.067 "trtype": "TCP", 00:15:18.067 "adrfam": "IPv4", 00:15:18.067 "traddr": "10.0.0.2", 00:15:18.067 "trsvcid": "4420" 00:15:18.067 }, 00:15:18.067 "peer_address": { 00:15:18.067 "trtype": "TCP", 00:15:18.067 "adrfam": "IPv4", 00:15:18.067 "traddr": "10.0.0.1", 00:15:18.067 "trsvcid": "38616" 00:15:18.067 }, 00:15:18.067 "auth": { 00:15:18.067 "state": "completed", 00:15:18.067 "digest": "sha512", 00:15:18.067 "dhgroup": "ffdhe3072" 00:15:18.067 } 00:15:18.067 } 00:15:18.067 ]' 00:15:18.067 13:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:18.067 13:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:18.067 13:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:18.067 13:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:18.067 13:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:18.324 13:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.324 13:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.324 13:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.582 13:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmI5YWMwZTdiYTUyMmQwOGI4MmE4YTYxODRmYWFjZDM3NWI2YTI4NTkyOTIyOGE0Mc2IAg==: --dhchap-ctrl-secret DHHC-1:03:ZjRlODQ3NjQwN2ZiMzU0OTM3NmM2YmYwYjNiMGFjMmYyNWI1MTgxYmNjZmNkZDJiNDVhM2M2YzUyYjRkM2YxYgA4Ht0=: 00:15:19.516 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.516 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:19.516 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.516 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.516 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.516 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:19.516 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:19.516 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:19.516 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:15:19.516 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:19.516 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:19.516 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:19.516 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:19.516 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.516 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.516 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.516 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.516 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.516 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.516 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.774 00:15:19.774 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:19.774 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:19.774 13:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.032 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:15:20.032 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.032 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.032 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.032 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.032 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:20.032 { 00:15:20.032 "cntlid": 115, 00:15:20.032 "qid": 0, 00:15:20.032 "state": "enabled", 00:15:20.032 "thread": "nvmf_tgt_poll_group_000", 00:15:20.032 "listen_address": { 00:15:20.032 "trtype": "TCP", 00:15:20.032 "adrfam": "IPv4", 00:15:20.032 "traddr": "10.0.0.2", 00:15:20.032 "trsvcid": "4420" 00:15:20.032 }, 00:15:20.032 "peer_address": { 00:15:20.032 "trtype": "TCP", 00:15:20.032 "adrfam": "IPv4", 00:15:20.032 "traddr": "10.0.0.1", 00:15:20.032 "trsvcid": "38650" 00:15:20.032 }, 00:15:20.032 "auth": { 00:15:20.032 "state": "completed", 00:15:20.032 "digest": "sha512", 00:15:20.032 "dhgroup": "ffdhe3072" 00:15:20.032 } 00:15:20.032 } 00:15:20.032 ]' 00:15:20.032 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:20.290 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:20.290 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:20.290 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:20.290 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:20.290 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.290 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.290 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.547 13:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjJkZDYxMmQ2MDRiNjg5NjJlMmI4YTY4OGYwMDY3NWPkJHk/: --dhchap-ctrl-secret DHHC-1:02:YzYyZmEwNGNlYzYwN2M4MTQzZWJkNjQwNDJmMjQ1MDc4NGNkNzg1NWUyZDVkNjQzuOGhjA==: 00:15:21.481 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.481 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:21.481 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.481 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.481 13:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.481 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:21.481 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:21.481 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:21.739 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:15:21.739 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:21.739 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:21.739 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:21.739 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:21.739 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.739 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.739 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.739 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.739 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.739 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.739 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.996 00:15:21.996 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:21.996 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:21.996 13:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.287 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.287 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.287 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.287 13:44:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.287 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.287 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:22.287 { 00:15:22.287 "cntlid": 117, 00:15:22.287 "qid": 0, 00:15:22.287 "state": "enabled", 00:15:22.287 "thread": "nvmf_tgt_poll_group_000", 00:15:22.287 "listen_address": { 00:15:22.287 "trtype": "TCP", 00:15:22.287 "adrfam": "IPv4", 00:15:22.287 "traddr": "10.0.0.2", 00:15:22.287 "trsvcid": "4420" 00:15:22.287 }, 00:15:22.288 "peer_address": { 00:15:22.288 "trtype": "TCP", 00:15:22.288 "adrfam": "IPv4", 00:15:22.288 "traddr": "10.0.0.1", 00:15:22.288 "trsvcid": "38670" 00:15:22.288 }, 00:15:22.288 "auth": { 00:15:22.288 "state": "completed", 00:15:22.288 "digest": "sha512", 00:15:22.288 "dhgroup": "ffdhe3072" 00:15:22.288 } 00:15:22.288 } 00:15:22.288 ]' 00:15:22.288 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:22.288 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:22.288 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:22.288 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:22.288 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:22.288 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.288 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.288 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.547 13:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjkxNWIxYmZlZDgwYWRhZjkxOWUxMmYxMzc5ODg3OWMyMTZkNjc0ZmI3M2YxODIx/I0Q8Q==: --dhchap-ctrl-secret DHHC-1:01:YjE4YTQwZmE2NGMwYWMwNWNkZDNhNjg0YjhkYTExZTEaz+PP: 00:15:23.481 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.481 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:23.481 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.481 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.481 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.481 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:23.481 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:15:23.481 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:23.739 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:15:23.739 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:23.739 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:23.739 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:23.739 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:23.739 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.739 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:23.739 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.739 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.739 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.739 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:23.739 13:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:23.995 00:15:23.995 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:23.995 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:23.995 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.252 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.252 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.252 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.252 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.252 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.252 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:24.252 { 00:15:24.252 "cntlid": 119, 00:15:24.252 "qid": 0, 00:15:24.252 "state": "enabled", 00:15:24.252 "thread": 
"nvmf_tgt_poll_group_000", 00:15:24.252 "listen_address": { 00:15:24.252 "trtype": "TCP", 00:15:24.252 "adrfam": "IPv4", 00:15:24.252 "traddr": "10.0.0.2", 00:15:24.252 "trsvcid": "4420" 00:15:24.252 }, 00:15:24.252 "peer_address": { 00:15:24.252 "trtype": "TCP", 00:15:24.252 "adrfam": "IPv4", 00:15:24.252 "traddr": "10.0.0.1", 00:15:24.252 "trsvcid": "57392" 00:15:24.252 }, 00:15:24.252 "auth": { 00:15:24.252 "state": "completed", 00:15:24.252 "digest": "sha512", 00:15:24.252 "dhgroup": "ffdhe3072" 00:15:24.252 } 00:15:24.252 } 00:15:24.252 ]' 00:15:24.252 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:24.509 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:24.509 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:24.509 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:24.509 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:24.509 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.509 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.509 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.767 13:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkYTAwYWZjYzcwMGYwMzQxNTJkODZlNjE0YmNiNWUzYWZkYWVlZmQ0M2IyM2Y1ZTYwMmQ0YzU4NmExN2ZkMrPSnvI=: 00:15:25.700 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.700 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:25.700 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.700 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.700 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.700 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:25.700 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:25.700 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:25.700 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:25.700 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:15:25.700 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:25.700 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:25.700 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:25.700 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:25.700 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.700 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.700 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.700 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.700 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.700 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.700 13:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.266 00:15:26.266 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:26.266 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.266 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.528 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.528 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.528 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.528 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.528 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.528 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.528 { 00:15:26.528 "cntlid": 121, 00:15:26.528 "qid": 0, 00:15:26.528 "state": "enabled", 00:15:26.528 "thread": "nvmf_tgt_poll_group_000", 00:15:26.528 "listen_address": { 00:15:26.528 "trtype": "TCP", 00:15:26.528 "adrfam": "IPv4", 00:15:26.528 "traddr": "10.0.0.2", 00:15:26.528 "trsvcid": "4420" 00:15:26.528 }, 00:15:26.528 "peer_address": { 00:15:26.528 "trtype": "TCP", 00:15:26.528 "adrfam": 
"IPv4", 00:15:26.528 "traddr": "10.0.0.1", 00:15:26.528 "trsvcid": "57432" 00:15:26.528 }, 00:15:26.528 "auth": { 00:15:26.528 "state": "completed", 00:15:26.528 "digest": "sha512", 00:15:26.528 "dhgroup": "ffdhe4096" 00:15:26.528 } 00:15:26.528 } 00:15:26.528 ]' 00:15:26.528 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.528 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:26.528 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.528 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:26.528 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.528 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.528 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.528 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.096 13:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmI5YWMwZTdiYTUyMmQwOGI4MmE4YTYxODRmYWFjZDM3NWI2YTI4NTkyOTIyOGE0Mc2IAg==: --dhchap-ctrl-secret DHHC-1:03:ZjRlODQ3NjQwN2ZiMzU0OTM3NmM2YmYwYjNiMGFjMmYyNWI1MTgxYmNjZmNkZDJiNDVhM2M2YzUyYjRkM2YxYgA4Ht0=: 00:15:28.030 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.030 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:28.030 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.030 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.030 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.030 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:28.030 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:28.031 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:28.031 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:15:28.031 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:28.031 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:28.031 
13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:28.031 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:28.031 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.031 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.031 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.031 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.031 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.031 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.031 13:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.596 00:15:28.596 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:28.596 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:28.596 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.596 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.596 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.596 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.596 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.596 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.596 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.596 { 00:15:28.596 "cntlid": 123, 00:15:28.596 "qid": 0, 00:15:28.596 "state": "enabled", 00:15:28.596 "thread": "nvmf_tgt_poll_group_000", 00:15:28.596 "listen_address": { 00:15:28.596 "trtype": "TCP", 00:15:28.596 "adrfam": "IPv4", 00:15:28.596 "traddr": "10.0.0.2", 00:15:28.596 "trsvcid": "4420" 00:15:28.596 }, 00:15:28.596 "peer_address": { 00:15:28.596 "trtype": "TCP", 00:15:28.596 "adrfam": "IPv4", 00:15:28.596 "traddr": "10.0.0.1", 00:15:28.596 "trsvcid": "57454" 00:15:28.596 }, 00:15:28.596 "auth": { 00:15:28.596 "state": "completed", 00:15:28.596 "digest": "sha512", 00:15:28.596 "dhgroup": "ffdhe4096" 00:15:28.596 } 00:15:28.596 } 00:15:28.596 ]' 00:15:28.596 13:44:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.854 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:28.854 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.854 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:28.854 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.854 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.854 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.854 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.112 13:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjJkZDYxMmQ2MDRiNjg5NjJlMmI4YTY4OGYwMDY3NWPkJHk/: --dhchap-ctrl-secret DHHC-1:02:YzYyZmEwNGNlYzYwN2M4MTQzZWJkNjQwNDJmMjQ1MDc4NGNkNzg1NWUyZDVkNjQzuOGhjA==: 00:15:30.045 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.045 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:30.045 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.045 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.045 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.045 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:30.046 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:30.046 13:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:30.303 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:15:30.303 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:30.303 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:30.303 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:30.303 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:30.303 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:15:30.303 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.303 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.303 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.303 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.303 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.303 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.561 00:15:30.818 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:30.818 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.818 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:31.076 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.076 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.076 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.076 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.076 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.076 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:31.076 { 00:15:31.076 "cntlid": 125, 00:15:31.076 "qid": 0, 00:15:31.076 "state": "enabled", 00:15:31.076 "thread": "nvmf_tgt_poll_group_000", 00:15:31.076 "listen_address": { 00:15:31.076 "trtype": "TCP", 00:15:31.076 "adrfam": "IPv4", 00:15:31.076 "traddr": "10.0.0.2", 00:15:31.076 "trsvcid": "4420" 00:15:31.076 }, 00:15:31.076 "peer_address": { 00:15:31.076 "trtype": "TCP", 00:15:31.076 "adrfam": "IPv4", 00:15:31.076 "traddr": "10.0.0.1", 00:15:31.076 "trsvcid": "57484" 00:15:31.076 }, 00:15:31.076 "auth": { 00:15:31.076 "state": "completed", 00:15:31.076 "digest": "sha512", 00:15:31.076 "dhgroup": "ffdhe4096" 00:15:31.076 } 00:15:31.076 } 00:15:31.076 ]' 00:15:31.076 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:31.076 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:31.076 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:31.076 
13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:31.076 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:31.076 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.076 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.076 13:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.334 13:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjkxNWIxYmZlZDgwYWRhZjkxOWUxMmYxMzc5ODg3OWMyMTZkNjc0ZmI3M2YxODIx/I0Q8Q==: --dhchap-ctrl-secret DHHC-1:01:YjE4YTQwZmE2NGMwYWMwNWNkZDNhNjg0YjhkYTExZTEaz+PP: 00:15:32.268 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.268 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:32.268 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.268 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.268 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.268 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.268 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:32.268 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:32.526 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:15:32.526 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:32.526 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:32.526 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:32.526 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:32.526 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.526 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:32.526 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:32.526 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.526 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.526 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:32.526 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:32.783 00:15:32.783 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.783 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:32.783 13:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.042 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.042 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.042 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.042 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.042 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.042 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.042 { 00:15:33.042 "cntlid": 127, 00:15:33.042 "qid": 0, 00:15:33.042 "state": "enabled", 00:15:33.042 "thread": "nvmf_tgt_poll_group_000", 00:15:33.042 "listen_address": { 00:15:33.042 "trtype": "TCP", 00:15:33.042 "adrfam": "IPv4", 00:15:33.042 "traddr": "10.0.0.2", 00:15:33.042 "trsvcid": "4420" 00:15:33.042 }, 00:15:33.042 "peer_address": { 00:15:33.042 "trtype": "TCP", 00:15:33.042 "adrfam": "IPv4", 00:15:33.042 "traddr": "10.0.0.1", 00:15:33.042 "trsvcid": "40178" 00:15:33.042 }, 00:15:33.042 "auth": { 00:15:33.042 "state": "completed", 00:15:33.042 "digest": "sha512", 00:15:33.042 "dhgroup": "ffdhe4096" 00:15:33.042 } 00:15:33.042 } 00:15:33.042 ]' 00:15:33.042 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.042 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:33.300 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.300 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:33.300 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.300 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.300 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.300 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.558 13:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkYTAwYWZjYzcwMGYwMzQxNTJkODZlNjE0YmNiNWUzYWZkYWVlZmQ0M2IyM2Y1ZTYwMmQ0YzU4NmExN2ZkMrPSnvI=: 00:15:34.488 13:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.488 13:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.488 13:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.488 13:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.488 13:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.488 13:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:34.488 13:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:34.488 13:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:34.488 13:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:34.745 13:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:15:34.745 13:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:34.745 13:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:34.745 13:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:34.745 13:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:34.745 13:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.745 13:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.745 13:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.745 13:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.745 13:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.745 13:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.745 13:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.309 00:15:35.309 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.309 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:35.309 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.309 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.309 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.309 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.309 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.566 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.566 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:35.566 { 00:15:35.566 "cntlid": 129, 00:15:35.566 "qid": 0, 00:15:35.566 "state": "enabled", 00:15:35.566 "thread": "nvmf_tgt_poll_group_000", 00:15:35.566 "listen_address": { 00:15:35.566 "trtype": "TCP", 00:15:35.566 "adrfam": "IPv4", 00:15:35.566 "traddr": "10.0.0.2", 00:15:35.566 "trsvcid": "4420" 00:15:35.566 }, 00:15:35.566 "peer_address": { 00:15:35.566 "trtype": "TCP", 00:15:35.566 "adrfam": "IPv4", 00:15:35.566 "traddr": "10.0.0.1", 00:15:35.566 "trsvcid": "40196" 00:15:35.566 }, 00:15:35.566 "auth": { 00:15:35.566 "state": "completed", 00:15:35.566 "digest": "sha512", 00:15:35.566 "dhgroup": "ffdhe6144" 00:15:35.566 } 00:15:35.566 } 00:15:35.566 ]' 00:15:35.566 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:35.566 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:35.566 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.566 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:35.566 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:35.566 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.566 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.566 13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.824 
13:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmI5YWMwZTdiYTUyMmQwOGI4MmE4YTYxODRmYWFjZDM3NWI2YTI4NTkyOTIyOGE0Mc2IAg==: --dhchap-ctrl-secret DHHC-1:03:ZjRlODQ3NjQwN2ZiMzU0OTM3NmM2YmYwYjNiMGFjMmYyNWI1MTgxYmNjZmNkZDJiNDVhM2M2YzUyYjRkM2YxYgA4Ht0=: 00:15:36.770 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.770 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.770 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.770 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.770 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.770 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.770 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:36.770 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:37.027 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:15:37.027 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:37.027 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:37.027 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:37.027 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:37.027 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.027 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.027 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.027 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.027 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.027 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.027 13:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.591 00:15:37.591 13:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:37.591 13:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:37.591 13:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.848 13:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.848 13:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.848 13:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.848 13:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.848 13:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.848 13:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.848 { 00:15:37.848 "cntlid": 131, 00:15:37.848 "qid": 0, 00:15:37.848 "state": "enabled", 00:15:37.848 "thread": "nvmf_tgt_poll_group_000", 00:15:37.848 "listen_address": { 00:15:37.848 "trtype": "TCP", 00:15:37.848 "adrfam": "IPv4", 00:15:37.848 "traddr": "10.0.0.2", 00:15:37.848 "trsvcid": "4420" 00:15:37.848 }, 00:15:37.848 "peer_address": { 00:15:37.848 "trtype": "TCP", 00:15:37.848 "adrfam": "IPv4", 00:15:37.848 "traddr": "10.0.0.1", 00:15:37.848 "trsvcid": "40220" 00:15:37.848 }, 00:15:37.848 "auth": { 00:15:37.848 "state": "completed", 00:15:37.848 "digest": "sha512", 00:15:37.848 "dhgroup": "ffdhe6144" 00:15:37.848 } 00:15:37.848 } 00:15:37.848 ]' 00:15:37.848 13:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.848 13:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:37.848 13:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.848 13:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:37.848 13:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.848 13:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.848 13:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.848 13:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.106 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:MjJkZDYxMmQ2MDRiNjg5NjJlMmI4YTY4OGYwMDY3NWPkJHk/: --dhchap-ctrl-secret DHHC-1:02:YzYyZmEwNGNlYzYwN2M4MTQzZWJkNjQwNDJmMjQ1MDc4NGNkNzg1NWUyZDVkNjQzuOGhjA==: 00:15:39.040 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.040 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:39.040 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.040 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.040 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.040 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:39.040 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:39.040 13:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:39.298 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:15:39.298 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:39.298 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:39.298 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:39.298 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:39.298 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.298 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.298 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.298 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.298 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.298 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.298 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.861 
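Each connect_authenticate pass in this trace is the same round trip; the sketch below condenses one ffdhe6144 pass using only commands the log itself runs. rpc.py, /var/tmp/host.sock, the subsystem NQN, and the key names are verbatim from the trace; <hostnqn> abbreviates the long nqn.2014-08.org.nvmexpress:uuid:5b23e107-... host NQN, and the digest/dhgroup/key triple changes on every loop iteration.

  # host side (hostrpc = rpc.py against /var/tmp/host.sock): pin the digest/dhgroup pair under test
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  # target side (rpc_cmd, default RPC socket): allow the host NQN with this pass's keys
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # host attaches with the matching keys, completing DH-HMAC-CHAP during connect
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # verify: controller visible on the host, qpair auth state "completed" on the target
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'          # expect: nvme0
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'     # expect: completed
  # tear down before the next digest/dhgroup/key combination
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
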
00:15:39.861 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.861 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.861 13:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.119 13:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.119 13:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.119 13:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.119 13:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.119 13:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.119 13:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:40.119 { 00:15:40.119 "cntlid": 133, 00:15:40.119 "qid": 0, 00:15:40.119 "state": "enabled", 00:15:40.119 "thread": "nvmf_tgt_poll_group_000", 00:15:40.119 "listen_address": { 00:15:40.119 "trtype": "TCP", 00:15:40.119 "adrfam": "IPv4", 00:15:40.119 "traddr": "10.0.0.2", 00:15:40.119 "trsvcid": "4420" 00:15:40.119 }, 00:15:40.119 "peer_address": { 00:15:40.119 "trtype": "TCP", 00:15:40.119 "adrfam": "IPv4", 00:15:40.119 "traddr": "10.0.0.1", 00:15:40.119 "trsvcid": "40240" 00:15:40.119 }, 00:15:40.119 "auth": { 00:15:40.119 "state": "completed", 00:15:40.119 "digest": "sha512", 00:15:40.119 "dhgroup": "ffdhe6144" 00:15:40.119 } 00:15:40.119 } 00:15:40.119 ]' 00:15:40.119 13:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:40.376 13:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:40.376 13:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:40.376 13:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:40.376 13:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:40.376 13:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.376 13:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.376 13:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.633 13:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjkxNWIxYmZlZDgwYWRhZjkxOWUxMmYxMzc5ODg3OWMyMTZkNjc0ZmI3M2YxODIx/I0Q8Q==: --dhchap-ctrl-secret DHHC-1:01:YjE4YTQwZmE2NGMwYWMwNWNkZDNhNjg0YjhkYTExZTEaz+PP: 00:15:41.565 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.565 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:15:41.565 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:41.565 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.565 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.565 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.565 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:41.565 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:41.565 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:41.822 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:15:41.822 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:41.822 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:41.822 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:41.822 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:41.822 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.822 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:41.822 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.822 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.822 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.822 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:41.823 13:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:42.387 00:15:42.388 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:42.388 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:42.388 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:42.388 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.388 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.388 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.388 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.388 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.388 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:42.388 { 00:15:42.388 "cntlid": 135, 00:15:42.388 "qid": 0, 00:15:42.388 "state": "enabled", 00:15:42.388 "thread": "nvmf_tgt_poll_group_000", 00:15:42.388 "listen_address": { 00:15:42.388 "trtype": "TCP", 00:15:42.388 "adrfam": "IPv4", 00:15:42.388 "traddr": "10.0.0.2", 00:15:42.388 "trsvcid": "4420" 00:15:42.388 }, 00:15:42.388 "peer_address": { 00:15:42.388 "trtype": "TCP", 00:15:42.388 "adrfam": "IPv4", 00:15:42.388 "traddr": "10.0.0.1", 00:15:42.388 "trsvcid": "40274" 00:15:42.388 }, 00:15:42.388 "auth": { 00:15:42.388 "state": "completed", 00:15:42.388 "digest": "sha512", 00:15:42.388 "dhgroup": "ffdhe6144" 00:15:42.388 } 00:15:42.388 } 00:15:42.388 ]' 00:15:42.388 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:42.646 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:42.646 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:42.646 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:42.646 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:42.646 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.646 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.646 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.904 13:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkYTAwYWZjYzcwMGYwMzQxNTJkODZlNjE0YmNiNWUzYWZkYWVlZmQ0M2IyM2Y1ZTYwMmQ0YzU4NmExN2ZkMrPSnvI=: 00:15:43.880 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.880 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:43.880 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.880 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:43.880 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.880 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:43.880 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.880 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:43.880 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:44.167 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:15:44.167 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:44.167 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:44.167 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:44.167 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:44.167 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.167 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.167 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.167 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.167 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.167 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.167 13:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.734 00:15:44.992 13:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:44.992 13:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:44.992 13:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.992 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.992 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
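Interleaved with the host-bdev checks, each pass also exercises the kernel initiator: nvme-cli connects with the raw DH-CHAP secrets, then disconnects and the host entry is removed on the target. A minimal sketch of that leg, with addresses, NQNs, and flags as they appear in the log; the base64 secret bodies are elided here, and our reading of the DHHC-1:<t>: prefix is that t encodes the key transform (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) as produced by nvme gen-dhchap-key.

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q <hostnqn> --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0     # expect: disconnected 1 controller(s)
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>
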
00:15:44.992 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.992 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.992 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.250 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.250 { 00:15:45.250 "cntlid": 137, 00:15:45.250 "qid": 0, 00:15:45.250 "state": "enabled", 00:15:45.250 "thread": "nvmf_tgt_poll_group_000", 00:15:45.250 "listen_address": { 00:15:45.250 "trtype": "TCP", 00:15:45.250 "adrfam": "IPv4", 00:15:45.250 "traddr": "10.0.0.2", 00:15:45.250 "trsvcid": "4420" 00:15:45.250 }, 00:15:45.250 "peer_address": { 00:15:45.250 "trtype": "TCP", 00:15:45.250 "adrfam": "IPv4", 00:15:45.250 "traddr": "10.0.0.1", 00:15:45.250 "trsvcid": "54732" 00:15:45.250 }, 00:15:45.250 "auth": { 00:15:45.250 "state": "completed", 00:15:45.250 "digest": "sha512", 00:15:45.250 "dhgroup": "ffdhe8192" 00:15:45.250 } 00:15:45.250 } 00:15:45.250 ]' 00:15:45.250 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.250 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:45.250 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.250 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:45.250 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.250 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.250 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.250 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.508 13:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmI5YWMwZTdiYTUyMmQwOGI4MmE4YTYxODRmYWFjZDM3NWI2YTI4NTkyOTIyOGE0Mc2IAg==: --dhchap-ctrl-secret DHHC-1:03:ZjRlODQ3NjQwN2ZiMzU0OTM3NmM2YmYwYjNiMGFjMmYyNWI1MTgxYmNjZmNkZDJiNDVhM2M2YzUyYjRkM2YxYgA4Ht0=: 00:15:46.493 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.493 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:46.493 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.493 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.493 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.493 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:46.493 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:46.493 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:46.750 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:15:46.750 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.750 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:46.750 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:46.750 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:46.750 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.750 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.750 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.750 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.750 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.750 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.750 13:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.683 00:15:47.683 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.683 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.683 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.940 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.940 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.940 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.940 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.940 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.940 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.940 { 00:15:47.940 "cntlid": 139, 00:15:47.940 "qid": 0, 00:15:47.940 "state": "enabled", 00:15:47.940 "thread": "nvmf_tgt_poll_group_000", 00:15:47.941 "listen_address": { 00:15:47.941 "trtype": "TCP", 00:15:47.941 "adrfam": "IPv4", 00:15:47.941 "traddr": "10.0.0.2", 00:15:47.941 "trsvcid": "4420" 00:15:47.941 }, 00:15:47.941 "peer_address": { 00:15:47.941 "trtype": "TCP", 00:15:47.941 "adrfam": "IPv4", 00:15:47.941 "traddr": "10.0.0.1", 00:15:47.941 "trsvcid": "54744" 00:15:47.941 }, 00:15:47.941 "auth": { 00:15:47.941 "state": "completed", 00:15:47.941 "digest": "sha512", 00:15:47.941 "dhgroup": "ffdhe8192" 00:15:47.941 } 00:15:47.941 } 00:15:47.941 ]' 00:15:47.941 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.941 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:47.941 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.941 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:47.941 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.941 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.941 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.941 13:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.198 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MjJkZDYxMmQ2MDRiNjg5NjJlMmI4YTY4OGYwMDY3NWPkJHk/: --dhchap-ctrl-secret DHHC-1:02:YzYyZmEwNGNlYzYwN2M4MTQzZWJkNjQwNDJmMjQ1MDc4NGNkNzg1NWUyZDVkNjQzuOGhjA==: 00:15:49.130 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.130 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:49.130 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.130 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.130 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.130 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.130 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:49.130 13:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:49.388 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:15:49.388 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:49.388 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:49.388 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:49.388 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:49.388 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.388 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.388 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.388 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.388 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.388 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.388 13:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.322 00:15:50.322 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.322 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.322 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.322 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.322 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.322 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.322 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.322 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.322 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:50.322 { 00:15:50.322 "cntlid": 141, 00:15:50.322 "qid": 0, 00:15:50.322 "state": "enabled", 00:15:50.322 "thread": "nvmf_tgt_poll_group_000", 00:15:50.322 "listen_address": 
{ 00:15:50.322 "trtype": "TCP", 00:15:50.322 "adrfam": "IPv4", 00:15:50.322 "traddr": "10.0.0.2", 00:15:50.322 "trsvcid": "4420" 00:15:50.322 }, 00:15:50.322 "peer_address": { 00:15:50.322 "trtype": "TCP", 00:15:50.322 "adrfam": "IPv4", 00:15:50.322 "traddr": "10.0.0.1", 00:15:50.322 "trsvcid": "54770" 00:15:50.322 }, 00:15:50.322 "auth": { 00:15:50.322 "state": "completed", 00:15:50.322 "digest": "sha512", 00:15:50.323 "dhgroup": "ffdhe8192" 00:15:50.323 } 00:15:50.323 } 00:15:50.323 ]' 00:15:50.323 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:50.581 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:50.581 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:50.581 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:50.581 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:50.581 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.581 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.581 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.840 13:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:ZjkxNWIxYmZlZDgwYWRhZjkxOWUxMmYxMzc5ODg3OWMyMTZkNjc0ZmI3M2YxODIx/I0Q8Q==: --dhchap-ctrl-secret DHHC-1:01:YjE4YTQwZmE2NGMwYWMwNWNkZDNhNjg0YjhkYTExZTEaz+PP: 00:15:51.775 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.775 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:51.775 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.775 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.775 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.775 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:51.775 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:51.775 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:52.033 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:15:52.033 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.033 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:52.033 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:52.033 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:52.033 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.033 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:52.033 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.033 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.033 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.033 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:52.033 13:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:52.967 00:15:52.967 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.967 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.967 13:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.226 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.226 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.226 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.226 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.226 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.226 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.226 { 00:15:53.226 "cntlid": 143, 00:15:53.226 "qid": 0, 00:15:53.226 "state": "enabled", 00:15:53.226 "thread": "nvmf_tgt_poll_group_000", 00:15:53.226 "listen_address": { 00:15:53.226 "trtype": "TCP", 00:15:53.226 "adrfam": "IPv4", 00:15:53.226 "traddr": "10.0.0.2", 00:15:53.226 "trsvcid": "4420" 00:15:53.226 }, 00:15:53.226 "peer_address": { 00:15:53.226 "trtype": "TCP", 00:15:53.226 "adrfam": "IPv4", 00:15:53.226 "traddr": "10.0.0.1", 00:15:53.226 "trsvcid": "43966" 00:15:53.226 }, 00:15:53.226 "auth": { 00:15:53.226 "state": "completed", 00:15:53.226 "digest": "sha512", 00:15:53.226 "dhgroup": 
"ffdhe8192" 00:15:53.226 } 00:15:53.226 } 00:15:53.226 ]' 00:15:53.226 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.226 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:53.226 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.226 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:53.226 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:53.226 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.226 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.226 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.485 13:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkYTAwYWZjYzcwMGYwMzQxNTJkODZlNjE0YmNiNWUzYWZkYWVlZmQ0M2IyM2Y1ZTYwMmQ0YzU4NmExN2ZkMrPSnvI=: 00:15:54.419 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.419 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:54.419 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.419 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.419 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.419 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:15:54.419 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:15:54.419 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:15:54.419 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:54.419 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:54.419 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:54.678 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:15:54.678 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:54.678 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:54.678 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:54.678 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:54.678 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.678 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.678 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.678 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.678 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.678 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.678 13:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.612 00:15:55.612 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.612 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.612 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.871 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.871 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.871 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.871 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.871 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.871 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.871 { 00:15:55.871 "cntlid": 145, 00:15:55.871 "qid": 0, 00:15:55.871 "state": "enabled", 00:15:55.871 "thread": "nvmf_tgt_poll_group_000", 00:15:55.871 "listen_address": { 00:15:55.871 "trtype": "TCP", 00:15:55.871 "adrfam": "IPv4", 00:15:55.871 "traddr": "10.0.0.2", 00:15:55.871 "trsvcid": "4420" 00:15:55.871 }, 00:15:55.871 "peer_address": { 00:15:55.871 "trtype": "TCP", 00:15:55.871 "adrfam": "IPv4", 00:15:55.871 "traddr": "10.0.0.1", 00:15:55.871 "trsvcid": "43990" 00:15:55.871 }, 00:15:55.871 "auth": { 00:15:55.871 
"state": "completed", 00:15:55.871 "digest": "sha512", 00:15:55.871 "dhgroup": "ffdhe8192" 00:15:55.871 } 00:15:55.871 } 00:15:55.871 ]' 00:15:55.871 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.871 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:55.871 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.871 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:55.871 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.871 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.871 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.871 13:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.129 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZmI5YWMwZTdiYTUyMmQwOGI4MmE4YTYxODRmYWFjZDM3NWI2YTI4NTkyOTIyOGE0Mc2IAg==: --dhchap-ctrl-secret DHHC-1:03:ZjRlODQ3NjQwN2ZiMzU0OTM3NmM2YmYwYjNiMGFjMmYyNWI1MTgxYmNjZmNkZDJiNDVhM2M2YzUyYjRkM2YxYgA4Ht0=: 00:15:57.060 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.060 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:57.061 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.061 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.061 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.061 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:15:57.061 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.061 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.061 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.061 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:57.061 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:57.061 13:44:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:57.061 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:15:57.061 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:57.061 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:15:57.061 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:57.061 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:57.061 13:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:57.994 request: 00:15:57.994 { 00:15:57.994 "name": "nvme0", 00:15:57.994 "trtype": "tcp", 00:15:57.994 "traddr": "10.0.0.2", 00:15:57.994 "adrfam": "ipv4", 00:15:57.994 "trsvcid": "4420", 00:15:57.994 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:57.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:57.994 "prchk_reftag": false, 00:15:57.994 "prchk_guard": false, 00:15:57.994 "hdgst": false, 00:15:57.994 "ddgst": false, 00:15:57.994 "dhchap_key": "key2", 00:15:57.994 "method": "bdev_nvme_attach_controller", 00:15:57.994 "req_id": 1 00:15:57.994 } 00:15:57.994 Got JSON-RPC error response 00:15:57.994 response: 00:15:57.994 { 00:15:57.994 "code": -5, 00:15:57.994 "message": "Input/output error" 00:15:57.994 } 00:15:57.994 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:57.994 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:57.994 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:57.994 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:57.994 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:57.994 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.994 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.994 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.994 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.994 
13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.994 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.994 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.994 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:57.994 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:57.994 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:57.994 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:15:57.994 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:57.994 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:15:57.994 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:57.995 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:57.995 13:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:58.561 request: 00:15:58.561 { 00:15:58.561 "name": "nvme0", 00:15:58.561 "trtype": "tcp", 00:15:58.561 "traddr": "10.0.0.2", 00:15:58.561 "adrfam": "ipv4", 00:15:58.561 "trsvcid": "4420", 00:15:58.561 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:58.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:58.561 "prchk_reftag": false, 00:15:58.561 "prchk_guard": false, 00:15:58.561 "hdgst": false, 00:15:58.561 "ddgst": false, 00:15:58.561 "dhchap_key": "key1", 00:15:58.561 "dhchap_ctrlr_key": "ckey2", 00:15:58.561 "method": "bdev_nvme_attach_controller", 00:15:58.561 "req_id": 1 00:15:58.561 } 00:15:58.561 Got JSON-RPC error response 00:15:58.561 response: 00:15:58.561 { 00:15:58.561 "code": -5, 00:15:58.561 "message": "Input/output error" 00:15:58.561 } 00:15:58.561 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:58.561 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:58.561 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:58.561 13:44:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:58.561 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:58.561 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.561 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.561 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.561 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:15:58.561 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.561 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.561 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.561 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.561 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:58.561 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.561 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:15:58.561 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:58.561 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:15:58.561 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:58.561 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.561 13:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.494 request: 00:15:59.494 { 00:15:59.494 "name": "nvme0", 00:15:59.494 "trtype": "tcp", 00:15:59.494 "traddr": "10.0.0.2", 00:15:59.494 "adrfam": "ipv4", 00:15:59.494 "trsvcid": "4420", 00:15:59.494 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:59.494 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:59.494 "prchk_reftag": false, 00:15:59.494 "prchk_guard": false, 00:15:59.494 "hdgst": false, 00:15:59.494 "ddgst": false, 00:15:59.494 "dhchap_key": "key1", 00:15:59.494 "dhchap_ctrlr_key": "ckey1", 00:15:59.494 "method": "bdev_nvme_attach_controller", 00:15:59.494 "req_id": 1 00:15:59.494 } 00:15:59.494 Got JSON-RPC error response 00:15:59.494 response: 00:15:59.494 { 00:15:59.494 "code": -5, 00:15:59.494 "message": "Input/output error" 00:15:59.494 } 00:15:59.494 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:59.494 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:59.494 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:59.494 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:59.494 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:59.494 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.494 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.494 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.494 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 551546 00:15:59.494 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 551546 ']' 00:15:59.494 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 551546 00:15:59.494 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:15:59.494 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:59.494 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 551546 00:15:59.494 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:59.494 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:59.494 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 551546' 00:15:59.494 killing process with pid 551546 00:15:59.494 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 551546 00:15:59.494 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 551546 00:15:59.753 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:59.753 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:59.753 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:59.753 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.753 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=573234 00:15:59.753 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:59.753 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 573234 00:15:59.753 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 573234 ']' 00:15:59.753 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.753 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:59.753 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.753 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:59.753 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.011 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:00.011 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:00.011 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:00.011 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:00.011 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.011 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:00.011 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:00.011 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 573234 00:16:00.011 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 573234 ']' 00:16:00.011 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.011 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:00.011 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
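
[editor's note] The three failed-attach exchanges above all follow one negative-path template: register the host on the subsystem with one DH-CHAP key, attempt bdev_nvme_attach_controller through the host RPC socket with a different key (or controller key), and require the call to fail with JSON-RPC error -5 (Input/output error). A minimal sketch of that template follows; it is not the test's exact code. It assumes the target RPC socket is at rpc.py's default path, the host bdev layer listens on /var/tmp/host.sock, and keys key0..key3/ckey0..ckey3 were already loaded earlier in the run; the NQNs, address, and flags are copied from the log above.

#!/usr/bin/env bash
set -u
# Sketch of the mismatched-key check exercised by target/auth.sh above (assumed
# environment: keys already registered, target listening on 10.0.0.2:4420).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Target side: this host may only authenticate with key1.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1

# Host side: deliberately present key2. rpc.py exits non-zero on the JSON-RPC
# error response (code -5, "Input/output error"), which is the outcome the
# test requires here.
if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2; then
  echo "FAIL: attach succeeded with the wrong DH-CHAP key" >&2
  exit 1
fi
echo "OK: attach rejected as expected"

The same template, with only the --dhchap-key/--dhchap-ctrlr-key arguments varied, covers the ckey variants tried above and the digest/dhgroup restriction checks (bdev_nvme_set_options --dhchap-digests / --dhchap-dhgroups) run against the restarted target below. [end note]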
00:16:00.011 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:00.011 13:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.271 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:00.271 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:00.271 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:16:00.271 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.271 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.528 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.528 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:16:00.528 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.528 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:00.528 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:00.528 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:00.528 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.528 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:00.528 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.529 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.529 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.529 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:00.529 13:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:01.461 00:16:01.461 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.461 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.461 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.461 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.461 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.461 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.461 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.461 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.461 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.461 { 00:16:01.461 "cntlid": 1, 00:16:01.461 "qid": 0, 00:16:01.461 "state": "enabled", 00:16:01.461 "thread": "nvmf_tgt_poll_group_000", 00:16:01.461 "listen_address": { 00:16:01.461 "trtype": "TCP", 00:16:01.461 "adrfam": "IPv4", 00:16:01.461 "traddr": "10.0.0.2", 00:16:01.461 "trsvcid": "4420" 00:16:01.461 }, 00:16:01.461 "peer_address": { 00:16:01.461 "trtype": "TCP", 00:16:01.461 "adrfam": "IPv4", 00:16:01.461 "traddr": "10.0.0.1", 00:16:01.461 "trsvcid": "44064" 00:16:01.461 }, 00:16:01.461 "auth": { 00:16:01.461 "state": "completed", 00:16:01.461 "digest": "sha512", 00:16:01.461 "dhgroup": "ffdhe8192" 00:16:01.461 } 00:16:01.461 } 00:16:01.461 ]' 00:16:01.461 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.461 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:01.462 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.719 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:01.719 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.719 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.719 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.719 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.976 13:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkYTAwYWZjYzcwMGYwMzQxNTJkODZlNjE0YmNiNWUzYWZkYWVlZmQ0M2IyM2Y1ZTYwMmQ0YzU4NmExN2ZkMrPSnvI=: 00:16:02.945 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.945 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:02.945 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.946 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.946 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.946 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:02.946 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.946 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.946 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.946 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:02.946 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:02.946 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:02.946 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:02.946 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:02.946 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:02.946 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:02.946 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:02.946 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:02.946 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:02.946 13:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:03.204 request: 00:16:03.204 { 00:16:03.204 "name": "nvme0", 00:16:03.204 "trtype": "tcp", 00:16:03.204 "traddr": "10.0.0.2", 00:16:03.204 "adrfam": "ipv4", 00:16:03.204 "trsvcid": "4420", 00:16:03.204 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:03.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:03.204 "prchk_reftag": false, 00:16:03.204 "prchk_guard": false, 00:16:03.204 "hdgst": false, 00:16:03.204 "ddgst": false, 00:16:03.204 "dhchap_key": "key3", 00:16:03.204 "method": "bdev_nvme_attach_controller", 00:16:03.204 "req_id": 1 00:16:03.204 } 00:16:03.204 Got JSON-RPC error response 00:16:03.204 response: 00:16:03.204 { 00:16:03.204 "code": -5, 00:16:03.204 "message": "Input/output error" 00:16:03.204 } 00:16:03.462 13:45:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:03.462 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:03.462 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:03.462 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:03.462 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:16:03.462 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:16:03.462 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:03.462 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:03.720 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:03.720 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:03.720 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:03.720 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:03.720 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:03.720 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:03.720 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:03.720 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:03.720 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:03.979 request: 00:16:03.979 { 00:16:03.979 "name": "nvme0", 00:16:03.979 "trtype": "tcp", 00:16:03.979 "traddr": "10.0.0.2", 00:16:03.979 "adrfam": "ipv4", 00:16:03.979 "trsvcid": "4420", 00:16:03.979 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:03.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:03.979 "prchk_reftag": false, 00:16:03.979 "prchk_guard": false, 00:16:03.979 "hdgst": false, 00:16:03.979 "ddgst": false, 00:16:03.979 "dhchap_key": "key3", 00:16:03.979 
"method": "bdev_nvme_attach_controller", 00:16:03.979 "req_id": 1 00:16:03.979 } 00:16:03.979 Got JSON-RPC error response 00:16:03.979 response: 00:16:03.979 { 00:16:03.979 "code": -5, 00:16:03.979 "message": "Input/output error" 00:16:03.979 } 00:16:03.979 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:03.979 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:03.979 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:03.979 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:03.979 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:03.979 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:16:03.979 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:03.979 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:03.979 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:03.979 13:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:04.237 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.237 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.237 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.237 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.237 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.237 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.237 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.237 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.237 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:04.237 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:04.237 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:04.237 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:04.237 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.237 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:04.237 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.237 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:04.237 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:04.495 request: 00:16:04.495 { 00:16:04.495 "name": "nvme0", 00:16:04.495 "trtype": "tcp", 00:16:04.495 "traddr": "10.0.0.2", 00:16:04.495 "adrfam": "ipv4", 00:16:04.495 "trsvcid": "4420", 00:16:04.495 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:04.495 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:04.495 "prchk_reftag": false, 00:16:04.495 "prchk_guard": false, 00:16:04.495 "hdgst": false, 00:16:04.495 "ddgst": false, 00:16:04.495 "dhchap_key": "key0", 00:16:04.495 "dhchap_ctrlr_key": "key1", 00:16:04.495 "method": "bdev_nvme_attach_controller", 00:16:04.495 "req_id": 1 00:16:04.495 } 00:16:04.495 Got JSON-RPC error response 00:16:04.495 response: 00:16:04.495 { 00:16:04.495 "code": -5, 00:16:04.495 "message": "Input/output error" 00:16:04.495 } 00:16:04.495 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:04.495 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:04.495 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:04.495 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:04.495 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:04.495 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:04.753 00:16:04.753 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:16:04.753 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.753 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:16:05.011 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.011 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.011 13:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.268 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:16:05.268 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:16:05.268 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 551681 00:16:05.268 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 551681 ']' 00:16:05.268 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 551681 00:16:05.268 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:05.268 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:05.268 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 551681 00:16:05.268 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:05.268 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:05.268 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 551681' 00:16:05.268 killing process with pid 551681 00:16:05.268 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 551681 00:16:05.268 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 551681 00:16:05.912 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:05.912 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:05.912 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:16:05.912 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:05.912 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:16:05.912 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:05.912 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:05.912 rmmod nvme_tcp 00:16:05.912 rmmod nvme_fabrics 00:16:05.912 rmmod nvme_keyring 00:16:05.912 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:05.912 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:16:05.912 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:16:05.912 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@489 -- # '[' -n 573234 ']' 00:16:05.912 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 573234 00:16:05.913 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 573234 ']' 00:16:05.913 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 573234 00:16:05.913 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:05.913 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:05.913 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 573234 00:16:05.913 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:05.913 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:05.913 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 573234' 00:16:05.913 killing process with pid 573234 00:16:05.913 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 573234 00:16:05.913 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 573234 00:16:06.174 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:06.174 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:06.174 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:06.174 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:06.174 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:06.174 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.174 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:06.174 13:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.084 13:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:08.084 13:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.OmS /tmp/spdk.key-sha256.x7P /tmp/spdk.key-sha384.odU /tmp/spdk.key-sha512.Ch0 /tmp/spdk.key-sha512.7F8 /tmp/spdk.key-sha384.Jzt /tmp/spdk.key-sha256.wNE '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:16:08.084 00:16:08.084 real 3m0.882s 00:16:08.084 user 7m2.731s 00:16:08.084 sys 0m25.049s 00:16:08.084 13:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:08.084 13:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.084 ************************************ 00:16:08.084 END TEST nvmf_auth_target 00:16:08.084 ************************************ 00:16:08.084 13:45:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:16:08.084 13:45:05 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:08.084 13:45:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:08.084 13:45:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:08.085 ************************************ 00:16:08.085 START TEST nvmf_bdevio_no_huge 00:16:08.085 ************************************ 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:08.085 * Looking for test storage... 00:16:08.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:08.085 13:45:05 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:16:08.085 13:45:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:10.617 13:45:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:10.617 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.617 13:45:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:10.617 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:10.617 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:10.617 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
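The discovery loop being traced here is plain sysfs walking: match each PCI function against the supported-NIC ID tables built above, then list the kernel net devices registered under it. As a minimal standalone sketch of the same idea (the 0x8086:0x159b E810 IDs and the cvl_* names are simply what this rig reports; the x722 and mlx IDs in the tables above would be matched the same way):

  #!/usr/bin/env bash
  # Sketch of gather_supported_nvmf_pci_devs: find Intel E810 ports (0x8086:0x159b)
  # and print the kernel net devices sitting behind each PCI function.
  for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    echo "Found ${pci##*/} (0x8086 - 0x159b)"
    for net in "$pci"/net/*; do
      [[ -e $net ]] && echo "  net device: ${net##*/}"   # e.g. cvl_0_0, cvl_0_1
    done
  done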
00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:10.618 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:10.618 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data.
00:16:10.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms
00:16:10.618
00:16:10.618 --- 10.0.0.2 ping statistics ---
00:16:10.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:10.618 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms
00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:10.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:10.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms
00:16:10.618
00:16:10.618 --- 10.0.0.1 ping statistics ---
00:16:10.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:10.618 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms
00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0
00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable
00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=576268
00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 576268
00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 576268 ']'
00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
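Stripped of the xtrace prefixes, the nvmf_tcp_init sequence that just validated itself with those two clean pings is short: one E810 port is moved into a private network namespace to act as the target (10.0.0.2) while its sibling stays in the root namespace as the initiator (10.0.0.1). A condensed sketch, using the interface and namespace names this runner happens to expose:

  ip netns add cvl_0_0_ns_spdk                   # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # move one port inside it
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator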
00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:10.618 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:10.618 [2024-07-25 13:45:07.416435] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:10.618 [2024-07-25 13:45:07.416528] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:10.618 [2024-07-25 13:45:07.488869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:10.618 [2024-07-25 13:45:07.596344] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.618 [2024-07-25 13:45:07.596435] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.618 [2024-07-25 13:45:07.596449] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:10.618 [2024-07-25 13:45:07.596460] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:10.618 [2024-07-25 13:45:07.596469] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:10.618 [2024-07-25 13:45:07.596559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:10.618 [2024-07-25 13:45:07.596622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:10.618 [2024-07-25 13:45:07.596703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:10.618 [2024-07-25 13:45:07.596705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:10.877 [2024-07-25 13:45:07.719147] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.877 13:45:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:10.877 Malloc0 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:10.877 [2024-07-25 13:45:07.757425] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:10.877 { 00:16:10.877 "params": { 00:16:10.877 "name": "Nvme$subsystem", 00:16:10.877 "trtype": "$TEST_TRANSPORT", 00:16:10.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.877 "adrfam": "ipv4", 00:16:10.877 "trsvcid": "$NVMF_PORT", 00:16:10.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.877 "hdgst": ${hdgst:-false}, 00:16:10.877 "ddgst": ${ddgst:-false} 00:16:10.877 }, 00:16:10.877 "method": "bdev_nvme_attach_controller" 00:16:10.877 } 00:16:10.877 EOF 00:16:10.877 )") 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
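Behind the rpc_cmd wrappers, bdevio.sh has by this point provisioned the freshly started target with a handful of JSON-RPC calls, and the heredoc being assembled here is the bdev_nvme_attach_controller config that the bdevio binary will consume over --json /dev/fd/62. The provisioning itself, spelled out against scripts/rpc.py (which rpc_cmd wraps; flags copied verbatim from the trace, socket is the /var/tmp/spdk.sock the app just opened):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, flags as captured above
  $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB / 512 B blocks, bdevio.sh@11-12
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420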
00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=,
00:16:10.877 13:45:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:16:10.877 "params": {
00:16:10.877 "name": "Nvme1",
00:16:10.877 "trtype": "tcp",
00:16:10.877 "traddr": "10.0.0.2",
00:16:10.877 "adrfam": "ipv4",
00:16:10.877 "trsvcid": "4420",
00:16:10.877 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:16:10.877 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:16:10.877 "hdgst": false,
00:16:10.877 "ddgst": false
00:16:10.877 },
00:16:10.877 "method": "bdev_nvme_attach_controller"
00:16:10.877 }'
00:16:10.877 [2024-07-25 13:45:07.806611] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:16:10.877 [2024-07-25 13:45:07.806700] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid576386 ]
00:16:10.877 [2024-07-25 13:45:07.874884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:16:11.135 [2024-07-25 13:45:07.989474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:16:11.135 [2024-07-25 13:45:07.989525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:16:11.135 [2024-07-25 13:45:07.989528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:11.135 I/O targets:
00:16:11.135 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:16:11.135
00:16:11.135
00:16:11.135 CUnit - A unit testing framework for C - Version 2.1-3
00:16:11.135 http://cunit.sourceforge.net/
00:16:11.135
00:16:11.135
00:16:11.135 Suite: bdevio tests on: Nvme1n1
00:16:11.393 Test: blockdev write read block ...passed
00:16:11.393 Test: blockdev write zeroes read block ...passed
00:16:11.393 Test: blockdev write zeroes read no split ...passed
00:16:11.393 Test: blockdev write zeroes read split ...passed
00:16:11.393 Test: blockdev write zeroes read split partial ...passed
00:16:11.393 Test: blockdev reset ...[2024-07-25 13:45:08.271504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:11.393 [2024-07-25 13:45:08.271617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bdfb0 (9): Bad file descriptor
00:16:11.393 [2024-07-25 13:45:08.291357] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:11.393 passed
00:16:11.393 Test: blockdev write read 8 blocks ...passed
00:16:11.393 Test: blockdev write read size > 128k ...passed
00:16:11.393 Test: blockdev write read invalid size ...passed
00:16:11.393 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:16:11.393 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:16:11.393 Test: blockdev write read max offset ...passed
00:16:11.393 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:16:11.651 Test: blockdev writev readv 8 blocks ...passed
00:16:11.651 Test: blockdev writev readv 30 x 1block ...passed
00:16:11.651 Test: blockdev writev readv block ...passed
00:16:11.651 Test: blockdev writev readv size > 128k ...passed
00:16:11.651 Test: blockdev writev readv size > 128k in two iovs ...passed
00:16:11.651 Test: blockdev comparev and writev ...[2024-07-25 13:45:08.505180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:11.651 [2024-07-25 13:45:08.505216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:16:11.651 [2024-07-25 13:45:08.505249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:11.651 [2024-07-25 13:45:08.505266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:16:11.651 [2024-07-25 13:45:08.505618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:11.651 [2024-07-25 13:45:08.505642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:16:11.651 [2024-07-25 13:45:08.505665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:11.651 [2024-07-25 13:45:08.505681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:16:11.651 [2024-07-25 13:45:08.506034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:11.651 [2024-07-25 13:45:08.506066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:16:11.651 [2024-07-25 13:45:08.506091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:11.651 [2024-07-25 13:45:08.506117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:16:11.651 [2024-07-25 13:45:08.506477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:11.651 [2024-07-25 13:45:08.506501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:16:11.651 [2024-07-25 13:45:08.506524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:16:11.651 [2024-07-25 13:45:08.506540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:16:11.651 passed
00:16:11.651 Test: blockdev nvme passthru rw ...passed
00:16:11.651 Test: blockdev nvme passthru vendor specific ...[2024-07-25 13:45:08.590321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:16:11.651 [2024-07-25 13:45:08.590357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:16:11.651 [2024-07-25 13:45:08.590519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:16:11.651 [2024-07-25 13:45:08.590542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:16:11.651 [2024-07-25 13:45:08.590689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:16:11.651 [2024-07-25 13:45:08.590713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:16:11.651 [2024-07-25 13:45:08.590847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:16:11.651 [2024-07-25 13:45:08.590871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:16:11.651 passed
00:16:11.651 Test: blockdev nvme admin passthru ...passed
00:16:11.651 Test: blockdev copy ...passed
00:16:11.652
00:16:11.652 Run Summary: Type Total Ran Passed Failed Inactive
00:16:11.652 suites 1 1 n/a 0 0
00:16:11.652 tests 23 23 23 0 0
00:16:11.652 asserts 152 152 152 0 n/a
00:16:11.652
00:16:11.652 Elapsed time = 0.983 seconds
00:16:12.218 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:12.218 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:12.218 13:45:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini
00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync
00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e
00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:12.218 rmmod nvme_tcp
00:16:12.218 rmmod nvme_fabrics
00:16:12.218 rmmod nvme_keyring
00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge --
nvmf/common.sh@124 -- # set -e 00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 576268 ']' 00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 576268 00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 576268 ']' 00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 576268 00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 576268 00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 576268' 00:16:12.218 killing process with pid 576268 00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 576268 00:16:12.218 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 576268 00:16:12.476 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:12.476 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:12.476 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:12.476 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:12.476 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:12.476 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.476 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:12.476 13:45:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:15.008 00:16:15.008 real 0m6.473s 00:16:15.008 user 0m9.785s 00:16:15.008 sys 0m2.528s 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:15.008 ************************************ 00:16:15.008 END TEST nvmf_bdevio_no_huge 00:16:15.008 ************************************ 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # 
'[' 3 -le 1 ']' 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:15.008 ************************************ 00:16:15.008 START TEST nvmf_tls 00:16:15.008 ************************************ 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:15.008 * Looking for test storage... 00:16:15.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.008 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
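The build_nvmf_app_args steps traced around here only ever assemble a command line, so their net effect is easiest to see as the two target invocations this log actually records, one per suite. A sketch, with paths and names as they appear on this runner (nvmf/common.sh@270 is what prepends the netns wrapper):

  APP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
  NS=(ip netns exec cvl_0_0_ns_spdk)   # NVMF_TARGET_NS_CMD from nvmf_tcp_init
  # nvmf_bdevio_no_huge: 1 GiB of ordinary memory instead of hugepages, core mask 0x78 (cores 3-6)
  "${NS[@]}" "$APP" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
  # nvmf_tls: single core (0x2), held at --wait-for-rpc so the ssl sock options below
  # can be set over RPC before subsystem initialization
  "${NS[@]}" "$APP" -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &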
00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:16:15.009 13:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:16.910 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:16.910 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:16.910 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.910 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:16.911 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:16.911 13:45:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:16:16.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:16.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms
00:16:16.911
00:16:16.911 --- 10.0.0.2 ping statistics ---
00:16:16.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:16.911 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:16.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:16.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms
00:16:16.911
00:16:16.911 --- 10.0.0.1 ping statistics ---
00:16:16.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:16.911 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=578724
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 578724
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 578724 ']'
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:16.911 13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable
13:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:16:16.911 [2024-07-25 13:45:13.914429] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:16:16.911 [2024-07-25 13:45:13.914523] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.170 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.170 [2024-07-25 13:45:13.978728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.170 [2024-07-25 13:45:14.087296] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.170 [2024-07-25 13:45:14.087357] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.170 [2024-07-25 13:45:14.087371] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.170 [2024-07-25 13:45:14.087382] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.170 [2024-07-25 13:45:14.087391] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.170 [2024-07-25 13:45:14.087425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.170 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:17.170 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:17.170 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:17.170 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:17.170 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:17.170 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.170 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:16:17.170 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:17.427 true 00:16:17.427 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:16:17.427 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:17.684 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:16:17.684 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:16:17.684 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:17.941 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:17.941 13:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:16:18.198 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:16:18.198 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:16:18.198 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
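With the target parked behind --wait-for-rpc, tls.sh@70-82 switches the default socket implementation to ssl and round-trips the tls_version option: it reads back 0 (the unconfigured default), pins the version to 13, and verifies the value it reads back matches. The `[[ 0 != \0 ]]` / `[[ 13 != \1\3 ]]` patterns are the script's assert idiom; a mismatch would take the failure branch. Condensed, with the full rpc.py path shortened:

    rpc.py sock_set_default_impl -i ssl
    rpc.py sock_impl_get_options -i ssl | jq -r .tls_version    # 0: nothing pinned yet
    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py sock_impl_get_options -i ssl | jq -r .tls_version    # expect 13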
7 00:16:18.455 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:18.455 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:16:18.714 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:16:18.714 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:16:18.714 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:18.714 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:16:18.972 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:16:18.972 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:16:18.972 13:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:19.230 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:19.230 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:16:19.488 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:16:19.488 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:16:19.488 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:19.746 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:19.746 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:16:20.004 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:16:20.004 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:16:20.004 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:20.004 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:20.004 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:20.004 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:20.004 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:16:20.004 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:20.004 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:20.004 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:20.004 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:20.004 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
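The same set/get/assert pattern repeats twice more above: once with --tls-version 7 (an arbitrary value; the RPC stores whatever it is given, so this exercises option storage rather than TLS semantics), and once for the enable_ktls flag, which is flipped on, read back as true, flipped off, and read back as false:

    rpc.py sock_impl_set_options -i ssl --enable-ktls
    rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls    # expect true
    rpc.py sock_impl_set_options -i ssl --disable-ktls
    rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls    # expect false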
1 00:16:20.004 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:20.004 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:20.004 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:16:20.004 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:20.004 13:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:20.004 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:20.004 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:16:20.262 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.43YTnXYNR2 00:16:20.262 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:16:20.262 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.0ZedZkEPxx 00:16:20.262 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:20.262 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:20.262 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.43YTnXYNR2 00:16:20.262 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.0ZedZkEPxx 00:16:20.262 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:20.521 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:16:20.778 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.43YTnXYNR2 00:16:20.778 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.43YTnXYNR2 00:16:20.779 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:21.036 [2024-07-25 13:45:17.925978] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.036 13:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:21.293 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:21.550 [2024-07-25 13:45:18.427372] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:21.551 [2024-07-25 13:45:18.427603] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:21.551 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:21.809 malloc0 00:16:21.809 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
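tls.sh@118-128 then prepares two pre-shared keys in the TLS PSK interchange format and stashes them in 0600 temp files: /tmp/tmp.43YTnXYNR2 holds the key the target will be provisioned with, and /tmp/tmp.0ZedZkEPxx holds a second key that is never registered, used later as the wrong-key case. Judging by its output, the format_key helper base64-encodes the key bytes followed by their little-endian CRC32 and wraps the result in a "NVMeTLSkey-1:<hash>:" prefix, where the two-digit hash field (01 here, 02 later) selects the HMAC used to derive the session secret. A sketch of that reconstruction, using the document's own python-one-liner idiom; this is an inference from the trace's output, not the helper's actual source:

    format_key() {   # sketch: prefix, key string, digest id -> interchange-format PSK
        local prefix=$1 key=$2 digest=$3
        python3 -c 'import base64,sys,zlib; k=sys.argv[2].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("{}:{:02x}:{}:".format(sys.argv[1], int(sys.argv[3]), base64.b64encode(k+crc).decode()))' "$prefix" "$key" "$digest"
    }
    format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
    # expected: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: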
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:22.067 13:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.43YTnXYNR2 00:16:22.325 [2024-07-25 13:45:19.160185] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:22.325 13:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.43YTnXYNR2 00:16:22.325 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.301 Initializing NVMe Controllers 00:16:32.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:32.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:32.301 Initialization complete. Launching workers. 00:16:32.301 ======================================================== 00:16:32.301 Latency(us) 00:16:32.301 Device Information : IOPS MiB/s Average min max 00:16:32.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8782.80 34.31 7288.91 1052.01 8943.61 00:16:32.301 ======================================================== 00:16:32.301 Total : 8782.80 34.31 7288.91 1052.01 8943.61 00:16:32.301 00:16:32.301 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.43YTnXYNR2 00:16:32.301 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:32.301 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:32.301 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:32.301 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.43YTnXYNR2' 00:16:32.301 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:32.301 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=580607 00:16:32.301 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:32.301 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:32.301 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 580607 /var/tmp/bdevperf.sock 00:16:32.301 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 580607 ']' 00:16:32.301 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:32.301 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:32.301 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
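With the keys on disk, the suite re-pins --tls-version 13, releases the target from --wait-for-rpc via framework_start_init, and provisions it through setup_nvmf_tgt (tls.sh@49-58): a TCP transport, subsystem cnode1, a TLS listener (the -k flag on nvmf_subsystem_add_listener is what makes port 4420 require TLS), a 32 MiB malloc bdev as namespace 1, and host1 authorized with the first key. The warning at tcp.c:3725 notes that this file-based PSK path is deprecated and scheduled for removal in v24.09. The first data-path check then runs spdk_nvme_perf from inside the namespace with -S ssl and --psk-path: 10 s of 4 KiB randrw at queue depth 64, sustaining about 8.8k IOPS in this run. The provisioning sequence, condensed:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.43YTnXYNR2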
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:32.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:32.301 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:32.301 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:32.561 [2024-07-25 13:45:29.339182] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:32.561 [2024-07-25 13:45:29.339271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid580607 ] 00:16:32.561 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.561 [2024-07-25 13:45:29.398088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.561 [2024-07-25 13:45:29.505343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.819 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:32.819 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:32.819 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.43YTnXYNR2 00:16:33.077 [2024-07-25 13:45:29.878725] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:33.077 [2024-07-25 13:45:29.878859] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:33.077 TLSTESTn1 00:16:33.077 13:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:33.077 Running I/O for 10 seconds... 
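tls.sh@143 re-checks the same path through the bdev layer: run_bdevperf boots the bdevperf application on a private RPC socket, attaches a controller named TLSTEST over TLS with bdev_nvme_attach_controller --psk, and drives a 10 s verify workload via bdevperf.py. Condensed to its three moving parts (paths shortened):

    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.43YTnXYNR2
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests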
00:16:45.313 00:16:45.313 Latency(us) 00:16:45.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.313 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:45.313 Verification LBA range: start 0x0 length 0x2000 00:16:45.313 TLSTESTn1 : 10.02 3080.44 12.03 0.00 0.00 41480.36 7330.32 42137.22 00:16:45.313 =================================================================================================================== 00:16:45.313 Total : 3080.44 12.03 0.00 0.00 41480.36 7330.32 42137.22 00:16:45.313 0 00:16:45.313 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:45.313 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 580607 00:16:45.313 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 580607 ']' 00:16:45.313 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 580607 00:16:45.313 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:45.313 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:45.313 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 580607 00:16:45.313 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:45.313 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 580607' 00:16:45.314 killing process with pid 580607 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 580607 00:16:45.314 Received shutdown signal, test time was about 10.000000 seconds 00:16:45.314 00:16:45.314 Latency(us) 00:16:45.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.314 =================================================================================================================== 00:16:45.314 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:45.314 [2024-07-25 13:45:40.178400] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 580607 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0ZedZkEPxx 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0ZedZkEPxx 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.314 
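The run above completes cleanly (TLSTESTn1 sustains roughly 3080 IOPS of 4 KiB verify traffic over the encrypted connection), the bdevperf process is killed, and the suite moves on to its negative cases: four attach attempts that are each expected to fail, wrapped in the NOT helper so the test passes only when the wrapped command does not. First, at tls.sh@146, comes the wrong-key case: the unregistered second key /tmp/tmp.0ZedZkEPxx against cnode1. A simplified sketch of the idiom, assuming only the essentials of the real autotest_common.sh helper:

    NOT() { ! "$@"; }   # simplified: succeed only if the wrapped command fails
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0ZedZkEPxx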
13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0ZedZkEPxx 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.0ZedZkEPxx' 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=581929 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 581929 /var/tmp/bdevperf.sock 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 581929 ']' 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:45.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:45.314 [2024-07-25 13:45:40.481491] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:16:45.314 [2024-07-25 13:45:40.481579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid581929 ] 00:16:45.314 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.314 [2024-07-25 13:45:40.539427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.314 [2024-07-25 13:45:40.641438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:45.314 13:45:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0ZedZkEPxx 00:16:45.314 [2024-07-25 13:45:41.031083] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:45.314 [2024-07-25 13:45:41.031226] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:45.314 [2024-07-25 13:45:41.038128] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:45.314 [2024-07-25 13:45:41.038176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a3f90 (107): Transport endpoint is not connected 00:16:45.314 [2024-07-25 13:45:41.039135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a3f90 (9): Bad file descriptor 00:16:45.314 [2024-07-25 13:45:41.040136] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:45.314 [2024-07-25 13:45:41.040158] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:45.314 [2024-07-25 13:45:41.040177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
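That failure plays out as intended: the target cannot match the offered PSK, the TCP connection dies mid-handshake, and the initiator surfaces it as errno 107 (ENOTCONN) on the socket read, a bad file descriptor on the flush, and finally a controller stuck in the error state. bdev_nvme_attach_controller reports this to the RPC caller as the code -5 "Input/output error" dumped below; run_bdevperf returns 1, which NOT converts into a pass (the es=1 bookkeeping in the trace).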
00:16:45.314 request: 00:16:45.314 { 00:16:45.314 "name": "TLSTEST", 00:16:45.314 "trtype": "tcp", 00:16:45.314 "traddr": "10.0.0.2", 00:16:45.314 "adrfam": "ipv4", 00:16:45.314 "trsvcid": "4420", 00:16:45.314 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:45.314 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:45.314 "prchk_reftag": false, 00:16:45.314 "prchk_guard": false, 00:16:45.314 "hdgst": false, 00:16:45.314 "ddgst": false, 00:16:45.314 "psk": "/tmp/tmp.0ZedZkEPxx", 00:16:45.314 "method": "bdev_nvme_attach_controller", 00:16:45.314 "req_id": 1 00:16:45.314 } 00:16:45.314 Got JSON-RPC error response 00:16:45.314 response: 00:16:45.314 { 00:16:45.314 "code": -5, 00:16:45.314 "message": "Input/output error" 00:16:45.314 } 00:16:45.314 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 581929 00:16:45.314 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 581929 ']' 00:16:45.314 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 581929 00:16:45.314 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:45.314 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:45.314 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 581929 00:16:45.314 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:45.314 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:45.314 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 581929' 00:16:45.314 killing process with pid 581929 00:16:45.314 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 581929 00:16:45.314 Received shutdown signal, test time was about 10.000000 seconds 00:16:45.314 00:16:45.314 Latency(us) 00:16:45.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.314 =================================================================================================================== 00:16:45.314 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:45.314 [2024-07-25 13:45:41.091672] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:45.314 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 581929 00:16:45.314 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:45.314 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:45.314 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:45.314 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:45.314 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:45.314 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.43YTnXYNR2 00:16:45.314 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:45.314 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.43YTnXYNR2 00:16:45.314 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:45.314 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.314 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:45.314 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.43YTnXYNR2 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.43YTnXYNR2' 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=581955 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 581955 /var/tmp/bdevperf.sock 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 581955 ']' 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:45.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:45.315 [2024-07-25 13:45:41.396808] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:16:45.315 [2024-07-25 13:45:41.396895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid581955 ] 00:16:45.315 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.315 [2024-07-25 13:45:41.458734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.315 [2024-07-25 13:45:41.570747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.43YTnXYNR2 00:16:45.315 [2024-07-25 13:45:41.956018] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:45.315 [2024-07-25 13:45:41.956168] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:45.315 [2024-07-25 13:45:41.961503] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:45.315 [2024-07-25 13:45:41.961538] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:45.315 [2024-07-25 13:45:41.961579] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:45.315 [2024-07-25 13:45:41.962077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x619f90 (107): Transport endpoint is not connected 00:16:45.315 [2024-07-25 13:45:41.963057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x619f90 (9): Bad file descriptor 00:16:45.315 [2024-07-25 13:45:41.964064] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:45.315 [2024-07-25 13:45:41.964086] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:45.315 [2024-07-25 13:45:41.964105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
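The second negative case uses the correct key but the wrong host identity: host2, which was never added with nvmf_subsystem_add_host. The tcp_sock_get_key error above shows where the match happens: the server looks up the PSK by the TLS identity string "NVMe0R01 <hostnqn> <subnqn>", so the lookup key for this attempt is

    printf 'NVMe0R01 %s %s\n' nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1

and nothing is registered under it; from there the failure follows the same ENOTCONN path to the -5 response dumped below.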
00:16:45.315 request: 00:16:45.315 { 00:16:45.315 "name": "TLSTEST", 00:16:45.315 "trtype": "tcp", 00:16:45.315 "traddr": "10.0.0.2", 00:16:45.315 "adrfam": "ipv4", 00:16:45.315 "trsvcid": "4420", 00:16:45.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:45.315 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:45.315 "prchk_reftag": false, 00:16:45.315 "prchk_guard": false, 00:16:45.315 "hdgst": false, 00:16:45.315 "ddgst": false, 00:16:45.315 "psk": "/tmp/tmp.43YTnXYNR2", 00:16:45.315 "method": "bdev_nvme_attach_controller", 00:16:45.315 "req_id": 1 00:16:45.315 } 00:16:45.315 Got JSON-RPC error response 00:16:45.315 response: 00:16:45.315 { 00:16:45.315 "code": -5, 00:16:45.315 "message": "Input/output error" 00:16:45.315 } 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 581955 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 581955 ']' 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 581955 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:45.315 13:45:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 581955 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 581955' 00:16:45.315 killing process with pid 581955 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 581955 00:16:45.315 Received shutdown signal, test time was about 10.000000 seconds 00:16:45.315 00:16:45.315 Latency(us) 00:16:45.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.315 =================================================================================================================== 00:16:45.315 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:45.315 [2024-07-25 13:45:42.016622] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 581955 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.43YTnXYNR2 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.43YTnXYNR2 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.43YTnXYNR2 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.43YTnXYNR2' 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=582086 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 582086 /var/tmp/bdevperf.sock 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 582086 ']' 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:45.315 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:45.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:45.316 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:45.316 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:45.316 [2024-07-25 13:45:42.324842] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:16:45.316 [2024-07-25 13:45:42.324931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid582086 ] 00:16:45.574 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.574 [2024-07-25 13:45:42.385177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.574 [2024-07-25 13:45:42.493689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.574 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:45.574 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:45.574 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.43YTnXYNR2 00:16:45.833 [2024-07-25 13:45:42.830572] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:45.833 [2024-07-25 13:45:42.830721] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:45.833 [2024-07-25 13:45:42.842685] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:45.833 [2024-07-25 13:45:42.842733] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:45.833 [2024-07-25 13:45:42.842774] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:45.833 [2024-07-25 13:45:42.843713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20f90 (107): Transport endpoint is not connected 00:16:45.833 [2024-07-25 13:45:42.844703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d20f90 (9): Bad file descriptor 00:16:45.833 [2024-07-25 13:45:42.845703] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:45.833 [2024-07-25 13:45:42.845723] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:45.833 [2024-07-25 13:45:42.845741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
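Case three inverts case two: correct key and host NQN, but the attach targets nqn.2016-06.io.spdk:cnode2, a subsystem that was never created, so the identity lookup ("NVMe0R01 ... cnode2" in the errors above) again comes up empty and the attach fails with the same -5 response, dumped below.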
00:16:45.833 request: 00:16:45.833 { 00:16:45.833 "name": "TLSTEST", 00:16:45.833 "trtype": "tcp", 00:16:45.833 "traddr": "10.0.0.2", 00:16:45.833 "adrfam": "ipv4", 00:16:45.833 "trsvcid": "4420", 00:16:45.833 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:45.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:45.834 "prchk_reftag": false, 00:16:45.834 "prchk_guard": false, 00:16:45.834 "hdgst": false, 00:16:45.834 "ddgst": false, 00:16:45.834 "psk": "/tmp/tmp.43YTnXYNR2", 00:16:45.834 "method": "bdev_nvme_attach_controller", 00:16:45.834 "req_id": 1 00:16:45.834 } 00:16:45.834 Got JSON-RPC error response 00:16:45.834 response: 00:16:45.834 { 00:16:45.834 "code": -5, 00:16:45.834 "message": "Input/output error" 00:16:45.834 } 00:16:45.834 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 582086 00:16:45.834 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 582086 ']' 00:16:45.834 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 582086 00:16:45.834 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:46.094 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:46.094 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 582086 00:16:46.094 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:46.094 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:46.094 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 582086' 00:16:46.094 killing process with pid 582086 00:16:46.094 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 582086 00:16:46.094 Received shutdown signal, test time was about 10.000000 seconds 00:16:46.094 00:16:46.094 Latency(us) 00:16:46.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.094 =================================================================================================================== 00:16:46.094 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:46.094 [2024-07-25 13:45:42.896382] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:46.094 13:45:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 582086 00:16:46.352 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:46.352 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:46.352 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:46.352 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:46.352 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:46.352 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:46.352 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:16:46.352 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:46.352 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:16:46.353 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:46.353 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:16:46.353 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:46.353 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:46.353 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:46.353 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:46.353 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:46.353 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:46.353 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:46.353 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=582221 00:16:46.353 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:46.353 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:46.353 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 582221 /var/tmp/bdevperf.sock 00:16:46.353 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 582221 ']' 00:16:46.353 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:46.353 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:46.353 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:46.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:46.353 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:46.353 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:46.353 [2024-07-25 13:45:43.205849] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:16:46.353 [2024-07-25 13:45:43.205935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid582221 ] 00:16:46.353 EAL: No free 2048 kB hugepages reported on node 1 00:16:46.353 [2024-07-25 13:45:43.264331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.353 [2024-07-25 13:45:43.366931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.610 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:46.610 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:46.610 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:46.869 [2024-07-25 13:45:43.735108] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:46.869 [2024-07-25 13:45:43.736817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a8770 (9): Bad file descriptor 00:16:46.869 [2024-07-25 13:45:43.737818] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:46.869 [2024-07-25 13:45:43.737838] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:46.869 [2024-07-25 13:45:43.737865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
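The fourth and final negative case omits --psk entirely: without a key the attach never gets a usable connection to the TLS-only listener and dies on the same ENOTCONN path, with the -5 response dumped below. That closes out this phase: killprocess stops the bdevperf instance (582221) and then the first target itself (578724), whose shutdown log notes the one deprecation hit for the file-based PSK path.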
00:16:46.869 request: 00:16:46.869 { 00:16:46.869 "name": "TLSTEST", 00:16:46.869 "trtype": "tcp", 00:16:46.869 "traddr": "10.0.0.2", 00:16:46.869 "adrfam": "ipv4", 00:16:46.869 "trsvcid": "4420", 00:16:46.869 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:46.869 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:46.869 "prchk_reftag": false, 00:16:46.869 "prchk_guard": false, 00:16:46.869 "hdgst": false, 00:16:46.869 "ddgst": false, 00:16:46.869 "method": "bdev_nvme_attach_controller", 00:16:46.869 "req_id": 1 00:16:46.869 } 00:16:46.869 Got JSON-RPC error response 00:16:46.869 response: 00:16:46.869 { 00:16:46.869 "code": -5, 00:16:46.869 "message": "Input/output error" 00:16:46.869 } 00:16:46.870 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 582221 00:16:46.870 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 582221 ']' 00:16:46.870 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 582221 00:16:46.870 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:46.870 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:46.870 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 582221 00:16:46.870 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:46.870 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:46.870 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 582221' 00:16:46.870 killing process with pid 582221 00:16:46.870 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 582221 00:16:46.870 Received shutdown signal, test time was about 10.000000 seconds 00:16:46.870 00:16:46.870 Latency(us) 00:16:46.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.870 =================================================================================================================== 00:16:46.870 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:46.870 13:45:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 582221 00:16:47.130 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:47.130 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:16:47.130 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:47.130 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:47.130 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:47.130 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 578724 00:16:47.130 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 578724 ']' 00:16:47.130 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 578724 00:16:47.130 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:47.130 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:47.130 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 578724 00:16:47.130 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:47.130 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:47.130 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 578724' 00:16:47.130 killing process with pid 578724 00:16:47.130 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 578724 00:16:47.130 [2024-07-25 13:45:44.081213] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:47.130 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 578724 00:16:47.388 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:47.388 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:47.388 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:47.388 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:47.388 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:47.388 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:16:47.388 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:47.388 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:47.388 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:16:47.388 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.k7xfqTV6me 00:16:47.388 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:47.388 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.k7xfqTV6me 00:16:47.389 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:16:47.389 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:47.389 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:47.389 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.647 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=582378 00:16:47.647 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:47.647 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 582378 00:16:47.647 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 582378 ']' 00:16:47.647 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.647 13:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:47.647 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.647 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:47.647 13:45:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:47.647 [2024-07-25 13:45:44.476024] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:47.647 [2024-07-25 13:45:44.476131] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.647 EAL: No free 2048 kB hugepages reported on node 1 00:16:47.647 [2024-07-25 13:45:44.547615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.647 [2024-07-25 13:45:44.660189] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.648 [2024-07-25 13:45:44.660272] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:47.648 [2024-07-25 13:45:44.660286] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.648 [2024-07-25 13:45:44.660299] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.648 [2024-07-25 13:45:44.660309] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
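[annotation] The format_interchange_psk step just above wraps the raw hex key in the NVMe/TCP TLS PSK interchange format: the prefix NVMeTLSkey-1:, a two-digit hash indicator (02 selects the 48-byte / SHA-384 variant used here), then base64 of the configured key bytes with a 4-byte CRC32 appended, and a closing colon. A minimal sketch of that transform, assuming the CRC is appended little-endian per the TP 8018 interchange layout (inferred from the logged output, not quoted from SPDK source):

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'PYEOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()                 # the test treats the hex string as ASCII bytes
crc = struct.pack("<I", zlib.crc32(key))   # 4-byte CRC32, little-endian (assumed layout)
print("NVMeTLSkey-1:02:%s:" % base64.b64encode(key + crc).decode())
PYEOF
# If the layout assumption holds, this prints the key_long value captured above
# (NVMeTLSkey-1:02:MDAx...wWXNJw==:); tls.sh then writes it to a mktemp file and
# chmods it to 0600, since SPDK rejects PSK files with looser permissions (tested later).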
00:16:47.648 [2024-07-25 13:45:44.660345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.582 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:48.582 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:48.582 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:48.582 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:48.582 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:48.582 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.582 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.k7xfqTV6me 00:16:48.582 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.k7xfqTV6me 00:16:48.582 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:48.840 [2024-07-25 13:45:45.755041] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:48.840 13:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:49.097 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:49.355 [2024-07-25 13:45:46.268494] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:49.355 [2024-07-25 13:45:46.268754] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.355 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:49.613 malloc0 00:16:49.613 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:49.871 13:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.k7xfqTV6me 00:16:50.130 [2024-07-25 13:45:47.089319] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:50.130 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.k7xfqTV6me 00:16:50.130 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:50.130 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:50.130 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:50.130 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.k7xfqTV6me' 00:16:50.130 13:45:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:50.130 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=582721 00:16:50.130 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:50.130 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:50.130 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 582721 /var/tmp/bdevperf.sock 00:16:50.130 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 582721 ']' 00:16:50.130 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:50.130 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:50.130 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:50.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:50.130 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:50.130 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:50.130 [2024-07-25 13:45:47.150902] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:50.130 [2024-07-25 13:45:47.150987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid582721 ] 00:16:50.388 EAL: No free 2048 kB hugepages reported on node 1 00:16:50.388 [2024-07-25 13:45:47.210902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.388 [2024-07-25 13:45:47.320219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:50.388 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:50.388 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:50.388 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.k7xfqTV6me 00:16:50.647 [2024-07-25 13:45:47.650886] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:50.647 [2024-07-25 13:45:47.651017] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:50.906 TLSTESTn1 00:16:50.907 13:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:50.907 Running I/O for 10 seconds... 
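[annotation] Condensed, the passing path exercised in the last few steps is: create a TCP transport, a subsystem backed by a 32 MiB malloc namespace, a TLS-enabled listener (the -k flag, still flagged experimental at this SPDK revision), register the host NQN against the 0600-permission PSK file, then have bdevperf attach over TLS with the same file and run the 10-second verify workload whose results follow. The same RPC sequence, pulled from the commands visible in this run (paths and addresses are this job's, not general defaults):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
KEY=/tmp/tmp.k7xfqTV6me    # interchange PSK written above, mode 0600

# Target side
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $KEY

# Initiator side: bdevperf serves its own RPC socket
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk $KEY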
00:17:00.888 00:17:00.888 Latency(us) 00:17:00.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.888 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:00.888 Verification LBA range: start 0x0 length 0x2000 00:17:00.888 TLSTESTn1 : 10.02 3429.78 13.40 0.00 0.00 37252.75 6456.51 44855.75 00:17:00.888 =================================================================================================================== 00:17:00.888 Total : 3429.78 13.40 0.00 0.00 37252.75 6456.51 44855.75 00:17:00.888 0 00:17:00.888 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:00.888 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 582721 00:17:00.888 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 582721 ']' 00:17:00.888 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 582721 00:17:00.888 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:00.888 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:00.888 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 582721 00:17:01.147 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:01.147 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:01.147 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 582721' 00:17:01.147 killing process with pid 582721 00:17:01.147 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 582721 00:17:01.147 Received shutdown signal, test time was about 10.000000 seconds 00:17:01.147 00:17:01.147 Latency(us) 00:17:01.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.147 =================================================================================================================== 00:17:01.147 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:01.147 [2024-07-25 13:45:57.931866] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:01.147 13:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 582721 00:17:01.406 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.k7xfqTV6me 00:17:01.406 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.k7xfqTV6me 00:17:01.406 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:01.406 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.k7xfqTV6me 00:17:01.406 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:01.406 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:01.406 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:01.406 13:45:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:01.406 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.k7xfqTV6me 00:17:01.406 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:01.406 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:01.406 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:01.406 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.k7xfqTV6me' 00:17:01.406 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:01.406 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=583983 00:17:01.406 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:01.406 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:01.406 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 583983 /var/tmp/bdevperf.sock 00:17:01.406 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 583983 ']' 00:17:01.406 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:01.406 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:01.406 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:01.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:01.406 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:01.406 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:01.406 [2024-07-25 13:45:58.252042] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:17:01.406 [2024-07-25 13:45:58.252147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid583983 ] 00:17:01.406 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.406 [2024-07-25 13:45:58.311152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.406 [2024-07-25 13:45:58.414683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.665 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:01.665 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:01.665 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.k7xfqTV6me 00:17:01.924 [2024-07-25 13:45:58.762269] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:01.924 [2024-07-25 13:45:58.762362] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:01.924 [2024-07-25 13:45:58.762387] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.k7xfqTV6me 00:17:01.924 request: 00:17:01.924 { 00:17:01.924 "name": "TLSTEST", 00:17:01.924 "trtype": "tcp", 00:17:01.924 "traddr": "10.0.0.2", 00:17:01.924 "adrfam": "ipv4", 00:17:01.924 "trsvcid": "4420", 00:17:01.924 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:01.924 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:01.924 "prchk_reftag": false, 00:17:01.924 "prchk_guard": false, 00:17:01.924 "hdgst": false, 00:17:01.924 "ddgst": false, 00:17:01.924 "psk": "/tmp/tmp.k7xfqTV6me", 00:17:01.924 "method": "bdev_nvme_attach_controller", 00:17:01.924 "req_id": 1 00:17:01.924 } 00:17:01.924 Got JSON-RPC error response 00:17:01.924 response: 00:17:01.924 { 00:17:01.924 "code": -1, 00:17:01.924 "message": "Operation not permitted" 00:17:01.924 } 00:17:01.924 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 583983 00:17:01.924 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 583983 ']' 00:17:01.924 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 583983 00:17:01.924 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:01.924 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:01.924 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 583983 00:17:01.924 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:01.924 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:01.924 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 583983' 00:17:01.924 killing process with pid 583983 00:17:01.924 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 583983 00:17:01.925 Received shutdown signal, test time was about 10.000000 seconds 00:17:01.925 
00:17:01.925 Latency(us) 00:17:01.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.925 =================================================================================================================== 00:17:01.925 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:01.925 13:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 583983 00:17:02.189 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:02.189 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:02.189 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:02.189 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:02.189 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:02.189 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 582378 00:17:02.189 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 582378 ']' 00:17:02.189 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 582378 00:17:02.189 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:02.189 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:02.189 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 582378 00:17:02.189 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:02.189 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:02.189 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 582378' 00:17:02.189 killing process with pid 582378 00:17:02.189 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 582378 00:17:02.189 [2024-07-25 13:45:59.100974] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:02.189 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 582378 00:17:02.454 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:17:02.454 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:02.454 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:02.454 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:02.454 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=584132 00:17:02.454 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:02.454 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 584132 00:17:02.454 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 584132 ']' 00:17:02.454 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.454 13:45:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:02.454 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.454 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:02.454 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:02.454 [2024-07-25 13:45:59.442251] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:02.454 [2024-07-25 13:45:59.442330] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.454 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.713 [2024-07-25 13:45:59.514235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.713 [2024-07-25 13:45:59.624371] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.713 [2024-07-25 13:45:59.624430] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.713 [2024-07-25 13:45:59.624444] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.713 [2024-07-25 13:45:59.624456] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.713 [2024-07-25 13:45:59.624465] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
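[annotation] At this point the key file has already been loosened to 0666 (target/tls.sh@170), and the initiator-side half of the permission check has run: bdev_nvme_attach_controller was rejected with "Incorrect permissions for PSK file" and JSON-RPC error -1 Operation not permitted. The freshly started target above exists to test the target-side half (tls.sh@177): nvmf_subsystem_add_host must likewise refuse the world-readable key. A sketch of the negative assertion the NOT/setup_nvmf_tgt wrappers implement here (the if/fail framing is mine; the commands are the logged ones):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
chmod 0666 /tmp/tmp.k7xfqTV6me
if $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.k7xfqTV6me; then
    echo "FAIL: world-readable PSK was accepted" >&2
    exit 1
fi
# Expected (and observed below): "Incorrect permissions for PSK file",
# JSON-RPC error -32603 "Internal error".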
00:17:02.713 [2024-07-25 13:45:59.624499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.713 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:02.713 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:02.713 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:02.713 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:02.713 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:02.972 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.972 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.k7xfqTV6me 00:17:02.972 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:02.972 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.k7xfqTV6me 00:17:02.972 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:17:02.972 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:02.972 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:17:02.972 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:02.972 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.k7xfqTV6me 00:17:02.972 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.k7xfqTV6me 00:17:02.972 13:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:02.972 [2024-07-25 13:45:59.987272] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:03.231 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:03.231 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:03.797 [2024-07-25 13:46:00.536760] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:03.797 [2024-07-25 13:46:00.536973] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.797 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:03.797 malloc0 00:17:03.798 13:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:04.056 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.k7xfqTV6me 00:17:04.315 [2024-07-25 13:46:01.293596] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:04.315 [2024-07-25 13:46:01.293638] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:04.315 [2024-07-25 13:46:01.293678] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:04.315 request: 00:17:04.315 { 00:17:04.315 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:04.315 "host": "nqn.2016-06.io.spdk:host1", 00:17:04.315 "psk": "/tmp/tmp.k7xfqTV6me", 00:17:04.315 "method": "nvmf_subsystem_add_host", 00:17:04.315 "req_id": 1 00:17:04.315 } 00:17:04.315 Got JSON-RPC error response 00:17:04.315 response: 00:17:04.315 { 00:17:04.315 "code": -32603, 00:17:04.315 "message": "Internal error" 00:17:04.315 } 00:17:04.315 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:04.315 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:04.315 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:04.315 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:04.315 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 584132 00:17:04.315 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 584132 ']' 00:17:04.315 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 584132 00:17:04.315 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:04.315 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:04.315 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 584132 00:17:04.315 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:04.315 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:04.315 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 584132' 00:17:04.315 killing process with pid 584132 00:17:04.315 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 584132 00:17:04.315 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 584132 00:17:04.884 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.k7xfqTV6me 00:17:04.884 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:04.884 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:04.884 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:04.884 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:04.884 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=584426 00:17:04.884 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:04.884 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 
584426 00:17:04.884 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 584426 ']' 00:17:04.884 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.884 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:04.884 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.884 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:04.884 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:04.884 [2024-07-25 13:46:01.678545] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:04.884 [2024-07-25 13:46:01.678642] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.884 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.884 [2024-07-25 13:46:01.740964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.884 [2024-07-25 13:46:01.837568] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.884 [2024-07-25 13:46:01.837641] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.884 [2024-07-25 13:46:01.837665] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:04.884 [2024-07-25 13:46:01.837675] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:04.884 [2024-07-25 13:46:01.837685] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
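[annotation] From tls.sh@181 onward the key is back at 0600 and the happy-path setup is rebuilt once more, now to exercise configuration persistence: after bdevperf attaches, both applications are dumped with save_config (tls.sh@196/@197, the two large JSON blobs below), and at @203 a fresh nvmf_tgt is launched directly from the saved target JSON via -c /dev/fd/62, verifying that the TLS listener and PSK-bound host survive a save/load round trip. A sketch of that round trip (the /dev/fd/62 path in the log is consistent with bash process substitution, an inference rather than something the log states):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
APP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

tgtconf=$($RPC save_config)                                # target JSON, incl. TLS listener + psk host
bdevperfconf=$($RPC -s /var/tmp/bdevperf.sock save_config) # initiator-side JSON for comparison

# Restart the target from the saved config; <(...) surfaces as -c /dev/fd/62.
# (The job additionally wraps this in "ip netns exec cvl_0_0_ns_spdk".)
$APP -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")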
00:17:04.884 [2024-07-25 13:46:01.837709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.143 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:05.143 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:05.143 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:05.143 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:05.143 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:05.143 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.143 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.k7xfqTV6me 00:17:05.143 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.k7xfqTV6me 00:17:05.143 13:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:05.401 [2024-07-25 13:46:02.211768] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.401 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:05.659 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:05.917 [2024-07-25 13:46:02.761297] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:05.917 [2024-07-25 13:46:02.761564] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.917 13:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:06.175 malloc0 00:17:06.175 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:06.433 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.k7xfqTV6me 00:17:06.691 [2024-07-25 13:46:03.561568] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:06.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=584711 00:17:06.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:06.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:06.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 584711 /var/tmp/bdevperf.sock 00:17:06.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # 
'[' -z 584711 ']' 00:17:06.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:06.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:06.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:06.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:06.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:06.691 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:06.691 [2024-07-25 13:46:03.625699] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:06.691 [2024-07-25 13:46:03.625781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid584711 ] 00:17:06.691 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.691 [2024-07-25 13:46:03.682760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.949 [2024-07-25 13:46:03.790465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.949 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:06.949 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:06.949 13:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.k7xfqTV6me 00:17:07.209 [2024-07-25 13:46:04.184370] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:07.209 [2024-07-25 13:46:04.184502] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:07.467 TLSTESTn1 00:17:07.467 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:17:07.726 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:17:07.726 "subsystems": [ 00:17:07.726 { 00:17:07.726 "subsystem": "keyring", 00:17:07.726 "config": [] 00:17:07.726 }, 00:17:07.726 { 00:17:07.726 "subsystem": "iobuf", 00:17:07.726 "config": [ 00:17:07.726 { 00:17:07.726 "method": "iobuf_set_options", 00:17:07.726 "params": { 00:17:07.726 "small_pool_count": 8192, 00:17:07.726 "large_pool_count": 1024, 00:17:07.726 "small_bufsize": 8192, 00:17:07.726 "large_bufsize": 135168 00:17:07.726 } 00:17:07.726 } 00:17:07.726 ] 00:17:07.726 }, 00:17:07.726 { 00:17:07.726 "subsystem": "sock", 00:17:07.726 "config": [ 00:17:07.726 { 00:17:07.726 "method": "sock_set_default_impl", 00:17:07.726 "params": { 00:17:07.726 "impl_name": "posix" 00:17:07.726 } 00:17:07.726 }, 00:17:07.726 { 00:17:07.726 "method": "sock_impl_set_options", 00:17:07.726 "params": { 00:17:07.726 "impl_name": "ssl", 00:17:07.726 "recv_buf_size": 4096, 00:17:07.726 "send_buf_size": 4096, 
00:17:07.726 "enable_recv_pipe": true, 00:17:07.726 "enable_quickack": false, 00:17:07.726 "enable_placement_id": 0, 00:17:07.726 "enable_zerocopy_send_server": true, 00:17:07.726 "enable_zerocopy_send_client": false, 00:17:07.726 "zerocopy_threshold": 0, 00:17:07.726 "tls_version": 0, 00:17:07.726 "enable_ktls": false 00:17:07.726 } 00:17:07.726 }, 00:17:07.726 { 00:17:07.726 "method": "sock_impl_set_options", 00:17:07.726 "params": { 00:17:07.726 "impl_name": "posix", 00:17:07.726 "recv_buf_size": 2097152, 00:17:07.726 "send_buf_size": 2097152, 00:17:07.726 "enable_recv_pipe": true, 00:17:07.726 "enable_quickack": false, 00:17:07.726 "enable_placement_id": 0, 00:17:07.726 "enable_zerocopy_send_server": true, 00:17:07.726 "enable_zerocopy_send_client": false, 00:17:07.726 "zerocopy_threshold": 0, 00:17:07.726 "tls_version": 0, 00:17:07.726 "enable_ktls": false 00:17:07.726 } 00:17:07.726 } 00:17:07.726 ] 00:17:07.726 }, 00:17:07.726 { 00:17:07.726 "subsystem": "vmd", 00:17:07.726 "config": [] 00:17:07.726 }, 00:17:07.726 { 00:17:07.726 "subsystem": "accel", 00:17:07.726 "config": [ 00:17:07.726 { 00:17:07.726 "method": "accel_set_options", 00:17:07.726 "params": { 00:17:07.726 "small_cache_size": 128, 00:17:07.726 "large_cache_size": 16, 00:17:07.726 "task_count": 2048, 00:17:07.726 "sequence_count": 2048, 00:17:07.726 "buf_count": 2048 00:17:07.726 } 00:17:07.726 } 00:17:07.726 ] 00:17:07.726 }, 00:17:07.726 { 00:17:07.726 "subsystem": "bdev", 00:17:07.726 "config": [ 00:17:07.726 { 00:17:07.726 "method": "bdev_set_options", 00:17:07.726 "params": { 00:17:07.726 "bdev_io_pool_size": 65535, 00:17:07.726 "bdev_io_cache_size": 256, 00:17:07.726 "bdev_auto_examine": true, 00:17:07.726 "iobuf_small_cache_size": 128, 00:17:07.726 "iobuf_large_cache_size": 16 00:17:07.726 } 00:17:07.726 }, 00:17:07.726 { 00:17:07.726 "method": "bdev_raid_set_options", 00:17:07.726 "params": { 00:17:07.726 "process_window_size_kb": 1024, 00:17:07.726 "process_max_bandwidth_mb_sec": 0 00:17:07.726 } 00:17:07.726 }, 00:17:07.726 { 00:17:07.726 "method": "bdev_iscsi_set_options", 00:17:07.726 "params": { 00:17:07.726 "timeout_sec": 30 00:17:07.726 } 00:17:07.726 }, 00:17:07.726 { 00:17:07.726 "method": "bdev_nvme_set_options", 00:17:07.726 "params": { 00:17:07.727 "action_on_timeout": "none", 00:17:07.727 "timeout_us": 0, 00:17:07.727 "timeout_admin_us": 0, 00:17:07.727 "keep_alive_timeout_ms": 10000, 00:17:07.727 "arbitration_burst": 0, 00:17:07.727 "low_priority_weight": 0, 00:17:07.727 "medium_priority_weight": 0, 00:17:07.727 "high_priority_weight": 0, 00:17:07.727 "nvme_adminq_poll_period_us": 10000, 00:17:07.727 "nvme_ioq_poll_period_us": 0, 00:17:07.727 "io_queue_requests": 0, 00:17:07.727 "delay_cmd_submit": true, 00:17:07.727 "transport_retry_count": 4, 00:17:07.727 "bdev_retry_count": 3, 00:17:07.727 "transport_ack_timeout": 0, 00:17:07.727 "ctrlr_loss_timeout_sec": 0, 00:17:07.727 "reconnect_delay_sec": 0, 00:17:07.727 "fast_io_fail_timeout_sec": 0, 00:17:07.727 "disable_auto_failback": false, 00:17:07.727 "generate_uuids": false, 00:17:07.727 "transport_tos": 0, 00:17:07.727 "nvme_error_stat": false, 00:17:07.727 "rdma_srq_size": 0, 00:17:07.727 "io_path_stat": false, 00:17:07.727 "allow_accel_sequence": false, 00:17:07.727 "rdma_max_cq_size": 0, 00:17:07.727 "rdma_cm_event_timeout_ms": 0, 00:17:07.727 "dhchap_digests": [ 00:17:07.727 "sha256", 00:17:07.727 "sha384", 00:17:07.727 "sha512" 00:17:07.727 ], 00:17:07.727 "dhchap_dhgroups": [ 00:17:07.727 "null", 00:17:07.727 "ffdhe2048", 00:17:07.727 
"ffdhe3072", 00:17:07.727 "ffdhe4096", 00:17:07.727 "ffdhe6144", 00:17:07.727 "ffdhe8192" 00:17:07.727 ] 00:17:07.727 } 00:17:07.727 }, 00:17:07.727 { 00:17:07.727 "method": "bdev_nvme_set_hotplug", 00:17:07.727 "params": { 00:17:07.727 "period_us": 100000, 00:17:07.727 "enable": false 00:17:07.727 } 00:17:07.727 }, 00:17:07.727 { 00:17:07.727 "method": "bdev_malloc_create", 00:17:07.727 "params": { 00:17:07.727 "name": "malloc0", 00:17:07.727 "num_blocks": 8192, 00:17:07.727 "block_size": 4096, 00:17:07.727 "physical_block_size": 4096, 00:17:07.727 "uuid": "2f58feb7-9061-4cef-bcd5-7601820c9a1f", 00:17:07.727 "optimal_io_boundary": 0, 00:17:07.727 "md_size": 0, 00:17:07.727 "dif_type": 0, 00:17:07.727 "dif_is_head_of_md": false, 00:17:07.727 "dif_pi_format": 0 00:17:07.727 } 00:17:07.727 }, 00:17:07.727 { 00:17:07.727 "method": "bdev_wait_for_examine" 00:17:07.727 } 00:17:07.727 ] 00:17:07.727 }, 00:17:07.727 { 00:17:07.727 "subsystem": "nbd", 00:17:07.727 "config": [] 00:17:07.727 }, 00:17:07.727 { 00:17:07.727 "subsystem": "scheduler", 00:17:07.727 "config": [ 00:17:07.727 { 00:17:07.727 "method": "framework_set_scheduler", 00:17:07.727 "params": { 00:17:07.727 "name": "static" 00:17:07.727 } 00:17:07.727 } 00:17:07.727 ] 00:17:07.727 }, 00:17:07.727 { 00:17:07.727 "subsystem": "nvmf", 00:17:07.727 "config": [ 00:17:07.727 { 00:17:07.727 "method": "nvmf_set_config", 00:17:07.727 "params": { 00:17:07.727 "discovery_filter": "match_any", 00:17:07.727 "admin_cmd_passthru": { 00:17:07.727 "identify_ctrlr": false 00:17:07.727 } 00:17:07.727 } 00:17:07.727 }, 00:17:07.727 { 00:17:07.727 "method": "nvmf_set_max_subsystems", 00:17:07.727 "params": { 00:17:07.727 "max_subsystems": 1024 00:17:07.727 } 00:17:07.727 }, 00:17:07.727 { 00:17:07.727 "method": "nvmf_set_crdt", 00:17:07.727 "params": { 00:17:07.727 "crdt1": 0, 00:17:07.727 "crdt2": 0, 00:17:07.727 "crdt3": 0 00:17:07.727 } 00:17:07.727 }, 00:17:07.727 { 00:17:07.727 "method": "nvmf_create_transport", 00:17:07.727 "params": { 00:17:07.727 "trtype": "TCP", 00:17:07.727 "max_queue_depth": 128, 00:17:07.727 "max_io_qpairs_per_ctrlr": 127, 00:17:07.727 "in_capsule_data_size": 4096, 00:17:07.727 "max_io_size": 131072, 00:17:07.727 "io_unit_size": 131072, 00:17:07.727 "max_aq_depth": 128, 00:17:07.727 "num_shared_buffers": 511, 00:17:07.727 "buf_cache_size": 4294967295, 00:17:07.727 "dif_insert_or_strip": false, 00:17:07.727 "zcopy": false, 00:17:07.727 "c2h_success": false, 00:17:07.727 "sock_priority": 0, 00:17:07.727 "abort_timeout_sec": 1, 00:17:07.727 "ack_timeout": 0, 00:17:07.727 "data_wr_pool_size": 0 00:17:07.727 } 00:17:07.727 }, 00:17:07.727 { 00:17:07.727 "method": "nvmf_create_subsystem", 00:17:07.727 "params": { 00:17:07.727 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.727 "allow_any_host": false, 00:17:07.727 "serial_number": "SPDK00000000000001", 00:17:07.727 "model_number": "SPDK bdev Controller", 00:17:07.727 "max_namespaces": 10, 00:17:07.727 "min_cntlid": 1, 00:17:07.727 "max_cntlid": 65519, 00:17:07.727 "ana_reporting": false 00:17:07.727 } 00:17:07.727 }, 00:17:07.727 { 00:17:07.727 "method": "nvmf_subsystem_add_host", 00:17:07.727 "params": { 00:17:07.727 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.727 "host": "nqn.2016-06.io.spdk:host1", 00:17:07.727 "psk": "/tmp/tmp.k7xfqTV6me" 00:17:07.727 } 00:17:07.727 }, 00:17:07.727 { 00:17:07.727 "method": "nvmf_subsystem_add_ns", 00:17:07.727 "params": { 00:17:07.727 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.727 "namespace": { 00:17:07.727 "nsid": 1, 00:17:07.727 
"bdev_name": "malloc0", 00:17:07.727 "nguid": "2F58FEB790614CEFBCD57601820C9A1F", 00:17:07.727 "uuid": "2f58feb7-9061-4cef-bcd5-7601820c9a1f", 00:17:07.727 "no_auto_visible": false 00:17:07.727 } 00:17:07.727 } 00:17:07.727 }, 00:17:07.727 { 00:17:07.727 "method": "nvmf_subsystem_add_listener", 00:17:07.727 "params": { 00:17:07.727 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.727 "listen_address": { 00:17:07.727 "trtype": "TCP", 00:17:07.727 "adrfam": "IPv4", 00:17:07.727 "traddr": "10.0.0.2", 00:17:07.727 "trsvcid": "4420" 00:17:07.727 }, 00:17:07.727 "secure_channel": true 00:17:07.727 } 00:17:07.727 } 00:17:07.727 ] 00:17:07.727 } 00:17:07.727 ] 00:17:07.727 }' 00:17:07.727 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:07.987 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:17:07.987 "subsystems": [ 00:17:07.987 { 00:17:07.987 "subsystem": "keyring", 00:17:07.987 "config": [] 00:17:07.987 }, 00:17:07.987 { 00:17:07.987 "subsystem": "iobuf", 00:17:07.987 "config": [ 00:17:07.987 { 00:17:07.987 "method": "iobuf_set_options", 00:17:07.988 "params": { 00:17:07.988 "small_pool_count": 8192, 00:17:07.988 "large_pool_count": 1024, 00:17:07.988 "small_bufsize": 8192, 00:17:07.988 "large_bufsize": 135168 00:17:07.988 } 00:17:07.988 } 00:17:07.988 ] 00:17:07.988 }, 00:17:07.988 { 00:17:07.988 "subsystem": "sock", 00:17:07.988 "config": [ 00:17:07.988 { 00:17:07.988 "method": "sock_set_default_impl", 00:17:07.988 "params": { 00:17:07.988 "impl_name": "posix" 00:17:07.988 } 00:17:07.988 }, 00:17:07.988 { 00:17:07.988 "method": "sock_impl_set_options", 00:17:07.988 "params": { 00:17:07.988 "impl_name": "ssl", 00:17:07.988 "recv_buf_size": 4096, 00:17:07.988 "send_buf_size": 4096, 00:17:07.988 "enable_recv_pipe": true, 00:17:07.988 "enable_quickack": false, 00:17:07.988 "enable_placement_id": 0, 00:17:07.988 "enable_zerocopy_send_server": true, 00:17:07.988 "enable_zerocopy_send_client": false, 00:17:07.988 "zerocopy_threshold": 0, 00:17:07.988 "tls_version": 0, 00:17:07.988 "enable_ktls": false 00:17:07.988 } 00:17:07.988 }, 00:17:07.988 { 00:17:07.988 "method": "sock_impl_set_options", 00:17:07.988 "params": { 00:17:07.988 "impl_name": "posix", 00:17:07.988 "recv_buf_size": 2097152, 00:17:07.988 "send_buf_size": 2097152, 00:17:07.988 "enable_recv_pipe": true, 00:17:07.988 "enable_quickack": false, 00:17:07.988 "enable_placement_id": 0, 00:17:07.988 "enable_zerocopy_send_server": true, 00:17:07.988 "enable_zerocopy_send_client": false, 00:17:07.988 "zerocopy_threshold": 0, 00:17:07.988 "tls_version": 0, 00:17:07.988 "enable_ktls": false 00:17:07.988 } 00:17:07.988 } 00:17:07.988 ] 00:17:07.988 }, 00:17:07.988 { 00:17:07.988 "subsystem": "vmd", 00:17:07.988 "config": [] 00:17:07.988 }, 00:17:07.988 { 00:17:07.988 "subsystem": "accel", 00:17:07.988 "config": [ 00:17:07.988 { 00:17:07.988 "method": "accel_set_options", 00:17:07.988 "params": { 00:17:07.988 "small_cache_size": 128, 00:17:07.988 "large_cache_size": 16, 00:17:07.988 "task_count": 2048, 00:17:07.988 "sequence_count": 2048, 00:17:07.988 "buf_count": 2048 00:17:07.988 } 00:17:07.988 } 00:17:07.988 ] 00:17:07.988 }, 00:17:07.988 { 00:17:07.988 "subsystem": "bdev", 00:17:07.988 "config": [ 00:17:07.988 { 00:17:07.988 "method": "bdev_set_options", 00:17:07.988 "params": { 00:17:07.988 "bdev_io_pool_size": 65535, 00:17:07.988 "bdev_io_cache_size": 256, 00:17:07.988 
"bdev_auto_examine": true, 00:17:07.988 "iobuf_small_cache_size": 128, 00:17:07.988 "iobuf_large_cache_size": 16 00:17:07.988 } 00:17:07.988 }, 00:17:07.988 { 00:17:07.988 "method": "bdev_raid_set_options", 00:17:07.988 "params": { 00:17:07.988 "process_window_size_kb": 1024, 00:17:07.988 "process_max_bandwidth_mb_sec": 0 00:17:07.988 } 00:17:07.988 }, 00:17:07.988 { 00:17:07.988 "method": "bdev_iscsi_set_options", 00:17:07.988 "params": { 00:17:07.988 "timeout_sec": 30 00:17:07.988 } 00:17:07.988 }, 00:17:07.988 { 00:17:07.988 "method": "bdev_nvme_set_options", 00:17:07.988 "params": { 00:17:07.988 "action_on_timeout": "none", 00:17:07.988 "timeout_us": 0, 00:17:07.988 "timeout_admin_us": 0, 00:17:07.988 "keep_alive_timeout_ms": 10000, 00:17:07.988 "arbitration_burst": 0, 00:17:07.988 "low_priority_weight": 0, 00:17:07.988 "medium_priority_weight": 0, 00:17:07.988 "high_priority_weight": 0, 00:17:07.988 "nvme_adminq_poll_period_us": 10000, 00:17:07.988 "nvme_ioq_poll_period_us": 0, 00:17:07.988 "io_queue_requests": 512, 00:17:07.988 "delay_cmd_submit": true, 00:17:07.988 "transport_retry_count": 4, 00:17:07.988 "bdev_retry_count": 3, 00:17:07.988 "transport_ack_timeout": 0, 00:17:07.988 "ctrlr_loss_timeout_sec": 0, 00:17:07.988 "reconnect_delay_sec": 0, 00:17:07.988 "fast_io_fail_timeout_sec": 0, 00:17:07.988 "disable_auto_failback": false, 00:17:07.988 "generate_uuids": false, 00:17:07.988 "transport_tos": 0, 00:17:07.988 "nvme_error_stat": false, 00:17:07.988 "rdma_srq_size": 0, 00:17:07.988 "io_path_stat": false, 00:17:07.988 "allow_accel_sequence": false, 00:17:07.988 "rdma_max_cq_size": 0, 00:17:07.988 "rdma_cm_event_timeout_ms": 0, 00:17:07.988 "dhchap_digests": [ 00:17:07.988 "sha256", 00:17:07.988 "sha384", 00:17:07.988 "sha512" 00:17:07.988 ], 00:17:07.988 "dhchap_dhgroups": [ 00:17:07.988 "null", 00:17:07.988 "ffdhe2048", 00:17:07.988 "ffdhe3072", 00:17:07.988 "ffdhe4096", 00:17:07.988 "ffdhe6144", 00:17:07.988 "ffdhe8192" 00:17:07.988 ] 00:17:07.988 } 00:17:07.988 }, 00:17:07.988 { 00:17:07.988 "method": "bdev_nvme_attach_controller", 00:17:07.988 "params": { 00:17:07.988 "name": "TLSTEST", 00:17:07.988 "trtype": "TCP", 00:17:07.988 "adrfam": "IPv4", 00:17:07.988 "traddr": "10.0.0.2", 00:17:07.988 "trsvcid": "4420", 00:17:07.988 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.988 "prchk_reftag": false, 00:17:07.988 "prchk_guard": false, 00:17:07.988 "ctrlr_loss_timeout_sec": 0, 00:17:07.988 "reconnect_delay_sec": 0, 00:17:07.988 "fast_io_fail_timeout_sec": 0, 00:17:07.988 "psk": "/tmp/tmp.k7xfqTV6me", 00:17:07.988 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:07.988 "hdgst": false, 00:17:07.988 "ddgst": false 00:17:07.988 } 00:17:07.988 }, 00:17:07.988 { 00:17:07.988 "method": "bdev_nvme_set_hotplug", 00:17:07.988 "params": { 00:17:07.988 "period_us": 100000, 00:17:07.988 "enable": false 00:17:07.988 } 00:17:07.988 }, 00:17:07.988 { 00:17:07.988 "method": "bdev_wait_for_examine" 00:17:07.988 } 00:17:07.988 ] 00:17:07.988 }, 00:17:07.988 { 00:17:07.988 "subsystem": "nbd", 00:17:07.988 "config": [] 00:17:07.988 } 00:17:07.988 ] 00:17:07.988 }' 00:17:07.988 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 584711 00:17:07.988 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 584711 ']' 00:17:07.988 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 584711 00:17:07.988 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:07.988 
13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:07.988 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 584711 00:17:07.988 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:07.988 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:07.989 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 584711' 00:17:07.989 killing process with pid 584711 00:17:07.989 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 584711 00:17:07.989 Received shutdown signal, test time was about 10.000000 seconds 00:17:07.989 00:17:07.989 Latency(us) 00:17:07.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.989 =================================================================================================================== 00:17:07.989 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:07.989 [2024-07-25 13:46:04.993295] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:07.989 13:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 584711 00:17:08.248 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 584426 00:17:08.248 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 584426 ']' 00:17:08.248 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 584426 00:17:08.248 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:08.248 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:08.248 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 584426 00:17:08.507 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:08.507 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:08.507 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 584426' 00:17:08.507 killing process with pid 584426 00:17:08.507 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 584426 00:17:08.507 [2024-07-25 13:46:05.285863] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:08.507 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 584426 00:17:08.765 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:08.765 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:08.765 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:17:08.765 "subsystems": [ 00:17:08.765 { 00:17:08.765 "subsystem": "keyring", 00:17:08.765 "config": [] 00:17:08.765 }, 00:17:08.765 { 00:17:08.765 "subsystem": "iobuf", 00:17:08.765 "config": [ 00:17:08.765 { 00:17:08.765 "method": "iobuf_set_options", 00:17:08.765 "params": { 
00:17:08.765 "small_pool_count": 8192, 00:17:08.765 "large_pool_count": 1024, 00:17:08.765 "small_bufsize": 8192, 00:17:08.765 "large_bufsize": 135168 00:17:08.765 } 00:17:08.765 } 00:17:08.765 ] 00:17:08.765 }, 00:17:08.765 { 00:17:08.765 "subsystem": "sock", 00:17:08.765 "config": [ 00:17:08.765 { 00:17:08.765 "method": "sock_set_default_impl", 00:17:08.765 "params": { 00:17:08.765 "impl_name": "posix" 00:17:08.765 } 00:17:08.765 }, 00:17:08.765 { 00:17:08.765 "method": "sock_impl_set_options", 00:17:08.765 "params": { 00:17:08.765 "impl_name": "ssl", 00:17:08.765 "recv_buf_size": 4096, 00:17:08.765 "send_buf_size": 4096, 00:17:08.765 "enable_recv_pipe": true, 00:17:08.765 "enable_quickack": false, 00:17:08.765 "enable_placement_id": 0, 00:17:08.765 "enable_zerocopy_send_server": true, 00:17:08.765 "enable_zerocopy_send_client": false, 00:17:08.765 "zerocopy_threshold": 0, 00:17:08.765 "tls_version": 0, 00:17:08.765 "enable_ktls": false 00:17:08.765 } 00:17:08.765 }, 00:17:08.765 { 00:17:08.765 "method": "sock_impl_set_options", 00:17:08.765 "params": { 00:17:08.765 "impl_name": "posix", 00:17:08.765 "recv_buf_size": 2097152, 00:17:08.765 "send_buf_size": 2097152, 00:17:08.765 "enable_recv_pipe": true, 00:17:08.765 "enable_quickack": false, 00:17:08.765 "enable_placement_id": 0, 00:17:08.765 "enable_zerocopy_send_server": true, 00:17:08.765 "enable_zerocopy_send_client": false, 00:17:08.765 "zerocopy_threshold": 0, 00:17:08.765 "tls_version": 0, 00:17:08.765 "enable_ktls": false 00:17:08.765 } 00:17:08.765 } 00:17:08.765 ] 00:17:08.765 }, 00:17:08.765 { 00:17:08.765 "subsystem": "vmd", 00:17:08.765 "config": [] 00:17:08.765 }, 00:17:08.765 { 00:17:08.765 "subsystem": "accel", 00:17:08.765 "config": [ 00:17:08.765 { 00:17:08.765 "method": "accel_set_options", 00:17:08.765 "params": { 00:17:08.765 "small_cache_size": 128, 00:17:08.765 "large_cache_size": 16, 00:17:08.765 "task_count": 2048, 00:17:08.765 "sequence_count": 2048, 00:17:08.765 "buf_count": 2048 00:17:08.765 } 00:17:08.765 } 00:17:08.765 ] 00:17:08.765 }, 00:17:08.765 { 00:17:08.765 "subsystem": "bdev", 00:17:08.765 "config": [ 00:17:08.765 { 00:17:08.765 "method": "bdev_set_options", 00:17:08.765 "params": { 00:17:08.765 "bdev_io_pool_size": 65535, 00:17:08.765 "bdev_io_cache_size": 256, 00:17:08.765 "bdev_auto_examine": true, 00:17:08.765 "iobuf_small_cache_size": 128, 00:17:08.765 "iobuf_large_cache_size": 16 00:17:08.765 } 00:17:08.765 }, 00:17:08.765 { 00:17:08.765 "method": "bdev_raid_set_options", 00:17:08.765 "params": { 00:17:08.765 "process_window_size_kb": 1024, 00:17:08.765 "process_max_bandwidth_mb_sec": 0 00:17:08.765 } 00:17:08.765 }, 00:17:08.765 { 00:17:08.765 "method": "bdev_iscsi_set_options", 00:17:08.765 "params": { 00:17:08.765 "timeout_sec": 30 00:17:08.765 } 00:17:08.765 }, 00:17:08.765 { 00:17:08.765 "method": "bdev_nvme_set_options", 00:17:08.765 "params": { 00:17:08.765 "action_on_timeout": "none", 00:17:08.765 "timeout_us": 0, 00:17:08.766 "timeout_admin_us": 0, 00:17:08.766 "keep_alive_timeout_ms": 10000, 00:17:08.766 "arbitration_burst": 0, 00:17:08.766 "low_priority_weight": 0, 00:17:08.766 "medium_priority_weight": 0, 00:17:08.766 "high_priority_weight": 0, 00:17:08.766 "nvme_adminq_poll_period_us": 10000, 00:17:08.766 "nvme_ioq_poll_period_us": 0, 00:17:08.766 "io_queue_requests": 0, 00:17:08.766 "delay_cmd_submit": true, 00:17:08.766 "transport_retry_count": 4, 00:17:08.766 "bdev_retry_count": 3, 00:17:08.766 "transport_ack_timeout": 0, 00:17:08.766 "ctrlr_loss_timeout_sec": 0, 00:17:08.766 
"reconnect_delay_sec": 0, 00:17:08.766 "fast_io_fail_timeout_sec": 0, 00:17:08.766 "disable_auto_failback": false, 00:17:08.766 "generate_uuids": false, 00:17:08.766 "transport_tos": 0, 00:17:08.766 "nvme_error_stat": false, 00:17:08.766 "rdma_srq_size": 0, 00:17:08.766 "io_path_stat": false, 00:17:08.766 "allow_accel_sequence": false, 00:17:08.766 "rdma_max_cq_size": 0, 00:17:08.766 "rdma_cm_event_timeout_ms": 0, 00:17:08.766 "dhchap_digests": [ 00:17:08.766 "sha256", 00:17:08.766 "sha384", 00:17:08.766 "sha512" 00:17:08.766 ], 00:17:08.766 "dhchap_dhgroups": [ 00:17:08.766 "null", 00:17:08.766 "ffdhe2048", 00:17:08.766 "ffdhe3072", 00:17:08.766 "ffdhe4096", 00:17:08.766 "ffdhe6144", 00:17:08.766 "ffdhe8192" 00:17:08.766 ] 00:17:08.766 } 00:17:08.766 }, 00:17:08.766 { 00:17:08.766 "method": "bdev_nvme_set_hotplug", 00:17:08.766 "params": { 00:17:08.766 "period_us": 100000, 00:17:08.766 "enable": false 00:17:08.766 } 00:17:08.766 }, 00:17:08.766 { 00:17:08.766 "method": "bdev_malloc_create", 00:17:08.766 "params": { 00:17:08.766 "name": "malloc0", 00:17:08.766 "num_blocks": 8192, 00:17:08.766 "block_size": 4096, 00:17:08.766 "physical_block_size": 4096, 00:17:08.766 "uuid": "2f58feb7-9061-4cef-bcd5-7601820c9a1f", 00:17:08.766 "optimal_io_boundary": 0, 00:17:08.766 "md_size": 0, 00:17:08.766 "dif_type": 0, 00:17:08.766 "dif_is_head_of_md": false, 00:17:08.766 "dif_pi_format": 0 00:17:08.766 } 00:17:08.766 }, 00:17:08.766 { 00:17:08.766 "method": "bdev_wait_for_examine" 00:17:08.766 } 00:17:08.766 ] 00:17:08.766 }, 00:17:08.766 { 00:17:08.766 "subsystem": "nbd", 00:17:08.766 "config": [] 00:17:08.766 }, 00:17:08.766 { 00:17:08.766 "subsystem": "scheduler", 00:17:08.766 "config": [ 00:17:08.766 { 00:17:08.766 "method": "framework_set_scheduler", 00:17:08.766 "params": { 00:17:08.766 "name": "static" 00:17:08.766 } 00:17:08.766 } 00:17:08.766 ] 00:17:08.766 }, 00:17:08.766 { 00:17:08.766 "subsystem": "nvmf", 00:17:08.766 "config": [ 00:17:08.766 { 00:17:08.766 "method": "nvmf_set_config", 00:17:08.766 "params": { 00:17:08.766 "discovery_filter": "match_any", 00:17:08.766 "admin_cmd_passthru": { 00:17:08.766 "identify_ctrlr": false 00:17:08.766 } 00:17:08.766 } 00:17:08.766 }, 00:17:08.766 { 00:17:08.766 "method": "nvmf_set_max_subsystems", 00:17:08.766 "params": { 00:17:08.766 "max_subsystems": 1024 00:17:08.766 } 00:17:08.766 }, 00:17:08.766 { 00:17:08.766 "method": "nvmf_set_crdt", 00:17:08.766 "params": { 00:17:08.766 "crdt1": 0, 00:17:08.766 "crdt2": 0, 00:17:08.766 "crdt3": 0 00:17:08.766 } 00:17:08.766 }, 00:17:08.766 { 00:17:08.766 "method": "nvmf_create_transport", 00:17:08.766 "params": { 00:17:08.766 "trtype": "TCP", 00:17:08.766 "max_queue_depth": 128, 00:17:08.766 "max_io_qpairs_per_ctrlr": 127, 00:17:08.766 "in_capsule_data_size": 4096, 00:17:08.766 "max_io_size": 131072, 00:17:08.766 "io_unit_size": 131072, 00:17:08.766 "max_aq_depth": 128, 00:17:08.766 "num_shared_buffers": 511, 00:17:08.766 "buf_cache_size": 4294967295, 00:17:08.766 "dif_insert_or_strip": false, 00:17:08.766 "zcopy": false, 00:17:08.766 "c2h_success": false, 00:17:08.766 "sock_priority": 0, 00:17:08.766 "abort_timeout_sec": 1, 00:17:08.766 "ack_timeout": 0, 00:17:08.766 "data_wr_pool_size": 0 00:17:08.766 } 00:17:08.766 }, 00:17:08.766 { 00:17:08.766 "method": "nvmf_create_subsystem", 00:17:08.766 "params": { 00:17:08.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:08.766 "allow_any_host": false, 00:17:08.766 "serial_number": "SPDK00000000000001", 00:17:08.766 "model_number": "SPDK bdev Controller", 00:17:08.766 
"max_namespaces": 10, 00:17:08.766 "min_cntlid": 1, 00:17:08.766 "max_cntlid": 65519, 00:17:08.766 "ana_reporting": false 00:17:08.766 } 00:17:08.766 }, 00:17:08.766 { 00:17:08.766 "method": "nvmf_subsystem_add_host", 00:17:08.766 "params": { 00:17:08.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:08.766 "host": "nqn.2016-06.io.spdk:host1", 00:17:08.766 "psk": "/tmp/tmp.k7xfqTV6me" 00:17:08.766 } 00:17:08.766 }, 00:17:08.766 { 00:17:08.766 "method": "nvmf_subsystem_add_ns", 00:17:08.766 "params": { 00:17:08.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:08.766 "namespace": { 00:17:08.766 "nsid": 1, 00:17:08.766 "bdev_name": "malloc0", 00:17:08.766 "nguid": "2F58FEB790614CEFBCD57601820C9A1F", 00:17:08.766 "uuid": "2f58feb7-9061-4cef-bcd5-7601820c9a1f", 00:17:08.766 "no_auto_visible": false 00:17:08.766 } 00:17:08.766 } 00:17:08.766 }, 00:17:08.766 { 00:17:08.766 "method": "nvmf_subsystem_add_listener", 00:17:08.766 "params": { 00:17:08.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:08.766 "listen_address": { 00:17:08.766 "trtype": "TCP", 00:17:08.766 "adrfam": "IPv4", 00:17:08.766 "traddr": "10.0.0.2", 00:17:08.766 "trsvcid": "4420" 00:17:08.766 }, 00:17:08.766 "secure_channel": true 00:17:08.766 } 00:17:08.766 } 00:17:08.766 ] 00:17:08.766 } 00:17:08.766 ] 00:17:08.766 }' 00:17:08.766 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:08.766 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.766 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=584993 00:17:08.766 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:08.766 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 584993 00:17:08.766 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 584993 ']' 00:17:08.766 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.766 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:08.766 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.766 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:08.766 13:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.766 [2024-07-25 13:46:05.605581] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:08.766 [2024-07-25 13:46:05.605660] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.766 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.766 [2024-07-25 13:46:05.667888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.766 [2024-07-25 13:46:05.772398] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:08.767 [2024-07-25 13:46:05.772454] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.767 [2024-07-25 13:46:05.772477] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.767 [2024-07-25 13:46:05.772488] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.767 [2024-07-25 13:46:05.772498] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.767 [2024-07-25 13:46:05.772570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.026 [2024-07-25 13:46:05.991131] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:09.026 [2024-07-25 13:46:06.014415] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:09.026 [2024-07-25 13:46:06.030478] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:09.026 [2024-07-25 13:46:06.030725] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.596 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:09.596 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:09.596 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:09.596 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:09.596 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:09.596 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.596 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=585139 00:17:09.596 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 585139 /var/tmp/bdevperf.sock 00:17:09.596 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 585139 ']' 00:17:09.596 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:09.596 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:09.596 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:09.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
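[annotation] Up to this point the target (pid 584993) has been configured entirely from the JSON blob echoed above and fed in over /dev/fd/62. The same state can be reached with live RPCs; a minimal sketch using only commands that appear verbatim later in this log (rpc.py defaults to /var/tmp/spdk.sock):

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.k7xfqTV6me
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k

The -k flag is what enables the experimental TLS listener noted in the tcp.c messages above, and --psk with a file path is the deprecated form that triggers the nvmf_tcp_psk_path warning seen on each target shutdown.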
00:17:09.596 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:09.596 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:09.596 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:17:09.596 "subsystems": [ 00:17:09.596 { 00:17:09.596 "subsystem": "keyring", 00:17:09.596 "config": [] 00:17:09.596 }, 00:17:09.596 { 00:17:09.596 "subsystem": "iobuf", 00:17:09.596 "config": [ 00:17:09.596 { 00:17:09.596 "method": "iobuf_set_options", 00:17:09.596 "params": { 00:17:09.596 "small_pool_count": 8192, 00:17:09.596 "large_pool_count": 1024, 00:17:09.596 "small_bufsize": 8192, 00:17:09.596 "large_bufsize": 135168 00:17:09.596 } 00:17:09.596 } 00:17:09.596 ] 00:17:09.596 }, 00:17:09.596 { 00:17:09.596 "subsystem": "sock", 00:17:09.596 "config": [ 00:17:09.596 { 00:17:09.596 "method": "sock_set_default_impl", 00:17:09.596 "params": { 00:17:09.596 "impl_name": "posix" 00:17:09.596 } 00:17:09.596 }, 00:17:09.596 { 00:17:09.596 "method": "sock_impl_set_options", 00:17:09.596 "params": { 00:17:09.596 "impl_name": "ssl", 00:17:09.596 "recv_buf_size": 4096, 00:17:09.597 "send_buf_size": 4096, 00:17:09.597 "enable_recv_pipe": true, 00:17:09.597 "enable_quickack": false, 00:17:09.597 "enable_placement_id": 0, 00:17:09.597 "enable_zerocopy_send_server": true, 00:17:09.597 "enable_zerocopy_send_client": false, 00:17:09.597 "zerocopy_threshold": 0, 00:17:09.597 "tls_version": 0, 00:17:09.597 "enable_ktls": false 00:17:09.597 } 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "method": "sock_impl_set_options", 00:17:09.597 "params": { 00:17:09.597 "impl_name": "posix", 00:17:09.597 "recv_buf_size": 2097152, 00:17:09.597 "send_buf_size": 2097152, 00:17:09.597 "enable_recv_pipe": true, 00:17:09.597 "enable_quickack": false, 00:17:09.597 "enable_placement_id": 0, 00:17:09.597 "enable_zerocopy_send_server": true, 00:17:09.597 "enable_zerocopy_send_client": false, 00:17:09.597 "zerocopy_threshold": 0, 00:17:09.597 "tls_version": 0, 00:17:09.597 "enable_ktls": false 00:17:09.597 } 00:17:09.597 } 00:17:09.597 ] 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "subsystem": "vmd", 00:17:09.597 "config": [] 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "subsystem": "accel", 00:17:09.597 "config": [ 00:17:09.597 { 00:17:09.597 "method": "accel_set_options", 00:17:09.597 "params": { 00:17:09.597 "small_cache_size": 128, 00:17:09.597 "large_cache_size": 16, 00:17:09.597 "task_count": 2048, 00:17:09.597 "sequence_count": 2048, 00:17:09.597 "buf_count": 2048 00:17:09.597 } 00:17:09.597 } 00:17:09.597 ] 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "subsystem": "bdev", 00:17:09.597 "config": [ 00:17:09.597 { 00:17:09.597 "method": "bdev_set_options", 00:17:09.597 "params": { 00:17:09.597 "bdev_io_pool_size": 65535, 00:17:09.597 "bdev_io_cache_size": 256, 00:17:09.597 "bdev_auto_examine": true, 00:17:09.597 "iobuf_small_cache_size": 128, 00:17:09.597 "iobuf_large_cache_size": 16 00:17:09.597 } 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "method": "bdev_raid_set_options", 00:17:09.597 "params": { 00:17:09.597 "process_window_size_kb": 1024, 00:17:09.597 "process_max_bandwidth_mb_sec": 0 00:17:09.597 } 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "method": "bdev_iscsi_set_options", 00:17:09.597 "params": { 00:17:09.597 "timeout_sec": 30 00:17:09.597 } 00:17:09.597 }, 00:17:09.597 { 
00:17:09.597 "method": "bdev_nvme_set_options", 00:17:09.597 "params": { 00:17:09.597 "action_on_timeout": "none", 00:17:09.597 "timeout_us": 0, 00:17:09.597 "timeout_admin_us": 0, 00:17:09.597 "keep_alive_timeout_ms": 10000, 00:17:09.597 "arbitration_burst": 0, 00:17:09.597 "low_priority_weight": 0, 00:17:09.597 "medium_priority_weight": 0, 00:17:09.597 "high_priority_weight": 0, 00:17:09.597 "nvme_adminq_poll_period_us": 10000, 00:17:09.597 "nvme_ioq_poll_period_us": 0, 00:17:09.597 "io_queue_requests": 512, 00:17:09.597 "delay_cmd_submit": true, 00:17:09.597 "transport_retry_count": 4, 00:17:09.597 "bdev_retry_count": 3, 00:17:09.597 "transport_ack_timeout": 0, 00:17:09.597 "ctrlr_loss_timeout_sec": 0, 00:17:09.597 "reconnect_delay_sec": 0, 00:17:09.597 "fast_io_fail_timeout_sec": 0, 00:17:09.597 "disable_auto_failback": false, 00:17:09.597 "generate_uuids": false, 00:17:09.597 "transport_tos": 0, 00:17:09.597 "nvme_error_stat": false, 00:17:09.597 "rdma_srq_size": 0, 00:17:09.597 "io_path_stat": false, 00:17:09.597 "allow_accel_sequence": false, 00:17:09.597 "rdma_max_cq_size": 0, 00:17:09.597 "rdma_cm_event_timeout_ms": 0, 00:17:09.597 "dhchap_digests": [ 00:17:09.597 "sha256", 00:17:09.597 "sha384", 00:17:09.597 "sha512" 00:17:09.597 ], 00:17:09.597 "dhchap_dhgroups": [ 00:17:09.597 "null", 00:17:09.597 "ffdhe2048", 00:17:09.597 13:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:09.597 "ffdhe3072", 00:17:09.597 "ffdhe4096", 00:17:09.597 "ffdhe6144", 00:17:09.597 "ffdhe8192" 00:17:09.597 ] 00:17:09.597 } 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "method": "bdev_nvme_attach_controller", 00:17:09.597 "params": { 00:17:09.597 "name": "TLSTEST", 00:17:09.597 "trtype": "TCP", 00:17:09.597 "adrfam": "IPv4", 00:17:09.597 "traddr": "10.0.0.2", 00:17:09.597 "trsvcid": "4420", 00:17:09.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:09.597 "prchk_reftag": false, 00:17:09.597 "prchk_guard": false, 00:17:09.597 "ctrlr_loss_timeout_sec": 0, 00:17:09.597 "reconnect_delay_sec": 0, 00:17:09.597 "fast_io_fail_timeout_sec": 0, 00:17:09.597 "psk": "/tmp/tmp.k7xfqTV6me", 00:17:09.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:09.597 "hdgst": false, 00:17:09.597 "ddgst": false 00:17:09.597 } 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "method": "bdev_nvme_set_hotplug", 00:17:09.597 "params": { 00:17:09.597 "period_us": 100000, 00:17:09.597 "enable": false 00:17:09.597 } 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "method": "bdev_wait_for_examine" 00:17:09.597 } 00:17:09.597 ] 00:17:09.597 }, 00:17:09.597 { 00:17:09.597 "subsystem": "nbd", 00:17:09.597 "config": [] 00:17:09.597 } 00:17:09.597 ] 00:17:09.597 }' 00:17:09.597 [2024-07-25 13:46:06.623311] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:17:09.597 [2024-07-25 13:46:06.623401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid585139 ] 00:17:09.857 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.857 [2024-07-25 13:46:06.689085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.857 [2024-07-25 13:46:06.800677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.117 [2024-07-25 13:46:06.971173] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:10.117 [2024-07-25 13:46:06.971290] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:10.684 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:10.684 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:10.684 13:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:10.684 Running I/O for 10 seconds... 00:17:22.925 00:17:22.925 Latency(us) 00:17:22.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.925 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:22.925 Verification LBA range: start 0x0 length 0x2000 00:17:22.925 TLSTESTn1 : 10.02 3450.44 13.48 0.00 0.00 37036.91 6262.33 39224.51 00:17:22.925 =================================================================================================================== 00:17:22.925 Total : 3450.44 13.48 0.00 0.00 37036.91 6262.33 39224.51 00:17:22.925 0 00:17:22.925 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:22.925 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 585139 00:17:22.925 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 585139 ']' 00:17:22.925 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 585139 00:17:22.925 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:22.925 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:22.925 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 585139 00:17:22.925 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:22.925 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:22.925 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 585139' 00:17:22.925 killing process with pid 585139 00:17:22.925 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 585139 00:17:22.925 Received shutdown signal, test time was about 10.000000 seconds 00:17:22.925 00:17:22.925 Latency(us) 00:17:22.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.925 
=================================================================================================================== 00:17:22.925 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:22.925 [2024-07-25 13:46:17.800068] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:22.925 13:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 585139 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 584993 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 584993 ']' 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 584993 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 584993 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 584993' 00:17:22.925 killing process with pid 584993 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 584993 00:17:22.925 [2024-07-25 13:46:18.091398] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 584993 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=586471 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 586471 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 586471 ']' 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
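[annotation] That 10-second pass (TLSTESTn1, ~3450 IOPS at queue depth 128, 4096-byte verify I/O) came from the bdevperf pair traced above; the test now restarts the target (586471) and rebuilds the same state through individual RPCs instead of a config file. A sketch of the bdevperf invocation, with $BDEVPERF_CONFIG standing in for the JSON echoed at target/tls.sh@204 (the script itself pipes it through /dev/fd/63):

  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$BDEVPERF_CONFIG") &
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The -z flag keeps bdevperf idle until it receives an RPC, and perform_tests (with its own 20-second timeout) is what actually starts the I/O.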
00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:22.925 [2024-07-25 13:46:18.411776] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:22.925 [2024-07-25 13:46:18.411869] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.925 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.925 [2024-07-25 13:46:18.473682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.925 [2024-07-25 13:46:18.571235] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.925 [2024-07-25 13:46:18.571292] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.925 [2024-07-25 13:46:18.571315] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.925 [2024-07-25 13:46:18.571326] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.925 [2024-07-25 13:46:18.571336] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.925 [2024-07-25 13:46:18.571362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.k7xfqTV6me 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.k7xfqTV6me 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:22.925 [2024-07-25 13:46:18.920993] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.925 13:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:22.925 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:22.925 [2024-07-25 13:46:19.410365] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:22.925 [2024-07-25 13:46:19.410594] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:22.925 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:22.925 malloc0 00:17:22.925 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:22.925 13:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.k7xfqTV6me 00:17:23.184 [2024-07-25 13:46:20.147710] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:23.184 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=586736 00:17:23.184 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:23.184 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:23.184 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 586736 /var/tmp/bdevperf.sock 00:17:23.184 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 586736 ']' 00:17:23.184 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:23.184 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:23.184 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:23.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:23.184 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:23.184 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:23.184 [2024-07-25 13:46:20.212624] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:17:23.184 [2024-07-25 13:46:20.212710] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid586736 ] 00:17:23.443 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.443 [2024-07-25 13:46:20.272543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.443 [2024-07-25 13:46:20.380055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.702 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:23.702 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:23.702 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.k7xfqTV6me 00:17:23.702 13:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:23.962 [2024-07-25 13:46:20.954720] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:24.221 nvme0n1 00:17:24.221 13:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:24.221 Running I/O for 1 seconds... 00:17:25.159 00:17:25.159 Latency(us) 00:17:25.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.159 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:25.159 Verification LBA range: start 0x0 length 0x2000 00:17:25.159 nvme0n1 : 1.02 3424.78 13.38 0.00 0.00 37040.93 6262.33 29709.65 00:17:25.159 =================================================================================================================== 00:17:25.159 Total : 3424.78 13.38 0.00 0.00 37040.93 6262.33 29709.65 00:17:25.159 0 00:17:25.159 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 586736 00:17:25.159 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 586736 ']' 00:17:25.159 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 586736 00:17:25.159 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:25.159 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:25.159 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 586736 00:17:25.417 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:25.417 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:25.417 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 586736' 00:17:25.417 killing process with pid 586736 00:17:25.417 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 586736 00:17:25.417 Received shutdown signal, test time 
was about 1.000000 seconds 00:17:25.417 00:17:25.417 Latency(us) 00:17:25.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.417 =================================================================================================================== 00:17:25.417 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:25.417 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 586736 00:17:25.677 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 586471 00:17:25.677 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 586471 ']' 00:17:25.677 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 586471 00:17:25.677 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:25.677 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:25.677 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 586471 00:17:25.677 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:25.677 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:25.677 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 586471' 00:17:25.677 killing process with pid 586471 00:17:25.677 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 586471 00:17:25.677 [2024-07-25 13:46:22.485274] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:25.677 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 586471 00:17:25.938 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:17:25.938 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:25.938 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:25.938 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:25.938 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=587037 00:17:25.938 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:25.938 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 587037 00:17:25.938 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 587037 ']' 00:17:25.938 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.938 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:25.938 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
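[annotation] Note the difference from the first teardown: this bdevperf (586736) attached through the keyring rather than the raw PSK file, so no spdk_nvme_ctrlr_opts.psk deprecation warning fires on the initiator side here; only the target's PSK-path warning remains. The keyring flow, exactly as driven in this log:

  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.k7xfqTV6me
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1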
00:17:25.938 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:25.938 13:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:25.938 [2024-07-25 13:46:22.809112] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:25.938 [2024-07-25 13:46:22.809187] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.938 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.938 [2024-07-25 13:46:22.872620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.196 [2024-07-25 13:46:22.981444] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.196 [2024-07-25 13:46:22.981507] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.196 [2024-07-25 13:46:22.981520] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.196 [2024-07-25 13:46:22.981530] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.196 [2024-07-25 13:46:22.981539] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:26.196 [2024-07-25 13:46:22.981575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.196 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:26.196 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:26.196 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:26.196 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:26.196 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:26.196 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.196 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:17:26.196 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.196 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:26.196 [2024-07-25 13:46:23.122892] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.196 malloc0 00:17:26.196 [2024-07-25 13:46:23.154775] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:26.196 [2024-07-25 13:46:23.162278] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.196 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.196 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=587059 00:17:26.196 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:26.196 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 587059 /var/tmp/bdevperf.sock 00:17:26.196 13:46:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 587059 ']' 00:17:26.196 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:26.196 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:26.196 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:26.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:26.196 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:26.196 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:26.196 [2024-07-25 13:46:23.225680] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:26.196 [2024-07-25 13:46:23.225740] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid587059 ] 00:17:26.465 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.465 [2024-07-25 13:46:23.282427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.465 [2024-07-25 13:46:23.386928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.465 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:26.465 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:26.465 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.k7xfqTV6me 00:17:26.723 13:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:26.980 [2024-07-25 13:46:23.983492] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:27.239 nvme0n1 00:17:27.239 13:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:27.239 Running I/O for 1 seconds... 
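[annotation] This third pass repeats the 1-second keyring-based verify run; its results follow immediately below, after which the test snapshots both running applications with save_config (the tgtcfg/bperfcfg JSON dumps that make up the rest of this section). A sketch of that capture step, with the output files named here only for illustration:

  scripts/rpc.py save_config > tgt.json                            # target, default /var/tmp/spdk.sock
  scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bperf.json

In the script itself the JSON is captured into shell variables (tgtcfg, bperfcfg) rather than files; comparing the two dumps shows the target config now records keyring_file_add_key with "psk": "key0" and "sock_impl": "ssl" instead of the deprecated /tmp path.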
00:17:28.178 00:17:28.178 Latency(us) 00:17:28.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.178 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:28.178 Verification LBA range: start 0x0 length 0x2000 00:17:28.178 nvme0n1 : 1.02 3518.66 13.74 0.00 0.00 36023.07 6213.78 29903.83 00:17:28.178 =================================================================================================================== 00:17:28.178 Total : 3518.66 13.74 0.00 0.00 36023.07 6213.78 29903.83 00:17:28.178 0 00:17:28.178 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:17:28.178 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.178 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:28.437 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.437 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:17:28.437 "subsystems": [ 00:17:28.437 { 00:17:28.437 "subsystem": "keyring", 00:17:28.437 "config": [ 00:17:28.437 { 00:17:28.437 "method": "keyring_file_add_key", 00:17:28.437 "params": { 00:17:28.437 "name": "key0", 00:17:28.437 "path": "/tmp/tmp.k7xfqTV6me" 00:17:28.437 } 00:17:28.437 } 00:17:28.437 ] 00:17:28.437 }, 00:17:28.437 { 00:17:28.437 "subsystem": "iobuf", 00:17:28.437 "config": [ 00:17:28.437 { 00:17:28.437 "method": "iobuf_set_options", 00:17:28.437 "params": { 00:17:28.437 "small_pool_count": 8192, 00:17:28.437 "large_pool_count": 1024, 00:17:28.437 "small_bufsize": 8192, 00:17:28.437 "large_bufsize": 135168 00:17:28.437 } 00:17:28.437 } 00:17:28.437 ] 00:17:28.437 }, 00:17:28.437 { 00:17:28.437 "subsystem": "sock", 00:17:28.437 "config": [ 00:17:28.437 { 00:17:28.437 "method": "sock_set_default_impl", 00:17:28.437 "params": { 00:17:28.437 "impl_name": "posix" 00:17:28.437 } 00:17:28.437 }, 00:17:28.437 { 00:17:28.437 "method": "sock_impl_set_options", 00:17:28.437 "params": { 00:17:28.437 "impl_name": "ssl", 00:17:28.437 "recv_buf_size": 4096, 00:17:28.437 "send_buf_size": 4096, 00:17:28.437 "enable_recv_pipe": true, 00:17:28.437 "enable_quickack": false, 00:17:28.437 "enable_placement_id": 0, 00:17:28.437 "enable_zerocopy_send_server": true, 00:17:28.437 "enable_zerocopy_send_client": false, 00:17:28.437 "zerocopy_threshold": 0, 00:17:28.437 "tls_version": 0, 00:17:28.437 "enable_ktls": false 00:17:28.437 } 00:17:28.437 }, 00:17:28.437 { 00:17:28.437 "method": "sock_impl_set_options", 00:17:28.437 "params": { 00:17:28.437 "impl_name": "posix", 00:17:28.437 "recv_buf_size": 2097152, 00:17:28.437 "send_buf_size": 2097152, 00:17:28.437 "enable_recv_pipe": true, 00:17:28.437 "enable_quickack": false, 00:17:28.437 "enable_placement_id": 0, 00:17:28.437 "enable_zerocopy_send_server": true, 00:17:28.437 "enable_zerocopy_send_client": false, 00:17:28.437 "zerocopy_threshold": 0, 00:17:28.437 "tls_version": 0, 00:17:28.437 "enable_ktls": false 00:17:28.437 } 00:17:28.437 } 00:17:28.437 ] 00:17:28.437 }, 00:17:28.437 { 00:17:28.437 "subsystem": "vmd", 00:17:28.437 "config": [] 00:17:28.437 }, 00:17:28.437 { 00:17:28.437 "subsystem": "accel", 00:17:28.437 "config": [ 00:17:28.437 { 00:17:28.437 "method": "accel_set_options", 00:17:28.437 "params": { 00:17:28.437 "small_cache_size": 128, 00:17:28.437 "large_cache_size": 16, 00:17:28.437 "task_count": 2048, 00:17:28.437 "sequence_count": 2048, 00:17:28.437 "buf_count": 
2048 00:17:28.437 } 00:17:28.437 } 00:17:28.437 ] 00:17:28.437 }, 00:17:28.437 { 00:17:28.437 "subsystem": "bdev", 00:17:28.437 "config": [ 00:17:28.437 { 00:17:28.437 "method": "bdev_set_options", 00:17:28.437 "params": { 00:17:28.437 "bdev_io_pool_size": 65535, 00:17:28.437 "bdev_io_cache_size": 256, 00:17:28.437 "bdev_auto_examine": true, 00:17:28.437 "iobuf_small_cache_size": 128, 00:17:28.437 "iobuf_large_cache_size": 16 00:17:28.437 } 00:17:28.437 }, 00:17:28.437 { 00:17:28.437 "method": "bdev_raid_set_options", 00:17:28.437 "params": { 00:17:28.437 "process_window_size_kb": 1024, 00:17:28.437 "process_max_bandwidth_mb_sec": 0 00:17:28.437 } 00:17:28.437 }, 00:17:28.437 { 00:17:28.437 "method": "bdev_iscsi_set_options", 00:17:28.437 "params": { 00:17:28.437 "timeout_sec": 30 00:17:28.437 } 00:17:28.437 }, 00:17:28.437 { 00:17:28.437 "method": "bdev_nvme_set_options", 00:17:28.437 "params": { 00:17:28.437 "action_on_timeout": "none", 00:17:28.437 "timeout_us": 0, 00:17:28.437 "timeout_admin_us": 0, 00:17:28.437 "keep_alive_timeout_ms": 10000, 00:17:28.437 "arbitration_burst": 0, 00:17:28.437 "low_priority_weight": 0, 00:17:28.437 "medium_priority_weight": 0, 00:17:28.437 "high_priority_weight": 0, 00:17:28.437 "nvme_adminq_poll_period_us": 10000, 00:17:28.437 "nvme_ioq_poll_period_us": 0, 00:17:28.437 "io_queue_requests": 0, 00:17:28.437 "delay_cmd_submit": true, 00:17:28.437 "transport_retry_count": 4, 00:17:28.437 "bdev_retry_count": 3, 00:17:28.437 "transport_ack_timeout": 0, 00:17:28.437 "ctrlr_loss_timeout_sec": 0, 00:17:28.437 "reconnect_delay_sec": 0, 00:17:28.437 "fast_io_fail_timeout_sec": 0, 00:17:28.437 "disable_auto_failback": false, 00:17:28.437 "generate_uuids": false, 00:17:28.437 "transport_tos": 0, 00:17:28.437 "nvme_error_stat": false, 00:17:28.437 "rdma_srq_size": 0, 00:17:28.437 "io_path_stat": false, 00:17:28.437 "allow_accel_sequence": false, 00:17:28.437 "rdma_max_cq_size": 0, 00:17:28.437 "rdma_cm_event_timeout_ms": 0, 00:17:28.437 "dhchap_digests": [ 00:17:28.437 "sha256", 00:17:28.437 "sha384", 00:17:28.437 "sha512" 00:17:28.437 ], 00:17:28.437 "dhchap_dhgroups": [ 00:17:28.437 "null", 00:17:28.437 "ffdhe2048", 00:17:28.438 "ffdhe3072", 00:17:28.438 "ffdhe4096", 00:17:28.438 "ffdhe6144", 00:17:28.438 "ffdhe8192" 00:17:28.438 ] 00:17:28.438 } 00:17:28.438 }, 00:17:28.438 { 00:17:28.438 "method": "bdev_nvme_set_hotplug", 00:17:28.438 "params": { 00:17:28.438 "period_us": 100000, 00:17:28.438 "enable": false 00:17:28.438 } 00:17:28.438 }, 00:17:28.438 { 00:17:28.438 "method": "bdev_malloc_create", 00:17:28.438 "params": { 00:17:28.438 "name": "malloc0", 00:17:28.438 "num_blocks": 8192, 00:17:28.438 "block_size": 4096, 00:17:28.438 "physical_block_size": 4096, 00:17:28.438 "uuid": "883b1bf8-ef01-4853-8552-00fde764639e", 00:17:28.438 "optimal_io_boundary": 0, 00:17:28.438 "md_size": 0, 00:17:28.438 "dif_type": 0, 00:17:28.438 "dif_is_head_of_md": false, 00:17:28.438 "dif_pi_format": 0 00:17:28.438 } 00:17:28.438 }, 00:17:28.438 { 00:17:28.438 "method": "bdev_wait_for_examine" 00:17:28.438 } 00:17:28.438 ] 00:17:28.438 }, 00:17:28.438 { 00:17:28.438 "subsystem": "nbd", 00:17:28.438 "config": [] 00:17:28.438 }, 00:17:28.438 { 00:17:28.438 "subsystem": "scheduler", 00:17:28.438 "config": [ 00:17:28.438 { 00:17:28.438 "method": "framework_set_scheduler", 00:17:28.438 "params": { 00:17:28.438 "name": "static" 00:17:28.438 } 00:17:28.438 } 00:17:28.438 ] 00:17:28.438 }, 00:17:28.438 { 00:17:28.438 "subsystem": "nvmf", 00:17:28.438 "config": [ 00:17:28.438 { 00:17:28.438 
"method": "nvmf_set_config", 00:17:28.438 "params": { 00:17:28.438 "discovery_filter": "match_any", 00:17:28.438 "admin_cmd_passthru": { 00:17:28.438 "identify_ctrlr": false 00:17:28.438 } 00:17:28.438 } 00:17:28.438 }, 00:17:28.438 { 00:17:28.438 "method": "nvmf_set_max_subsystems", 00:17:28.438 "params": { 00:17:28.438 "max_subsystems": 1024 00:17:28.438 } 00:17:28.438 }, 00:17:28.438 { 00:17:28.438 "method": "nvmf_set_crdt", 00:17:28.438 "params": { 00:17:28.438 "crdt1": 0, 00:17:28.438 "crdt2": 0, 00:17:28.438 "crdt3": 0 00:17:28.438 } 00:17:28.438 }, 00:17:28.438 { 00:17:28.438 "method": "nvmf_create_transport", 00:17:28.438 "params": { 00:17:28.438 "trtype": "TCP", 00:17:28.438 "max_queue_depth": 128, 00:17:28.438 "max_io_qpairs_per_ctrlr": 127, 00:17:28.438 "in_capsule_data_size": 4096, 00:17:28.438 "max_io_size": 131072, 00:17:28.438 "io_unit_size": 131072, 00:17:28.438 "max_aq_depth": 128, 00:17:28.438 "num_shared_buffers": 511, 00:17:28.438 "buf_cache_size": 4294967295, 00:17:28.438 "dif_insert_or_strip": false, 00:17:28.438 "zcopy": false, 00:17:28.438 "c2h_success": false, 00:17:28.438 "sock_priority": 0, 00:17:28.438 "abort_timeout_sec": 1, 00:17:28.438 "ack_timeout": 0, 00:17:28.438 "data_wr_pool_size": 0 00:17:28.438 } 00:17:28.438 }, 00:17:28.438 { 00:17:28.438 "method": "nvmf_create_subsystem", 00:17:28.438 "params": { 00:17:28.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.438 "allow_any_host": false, 00:17:28.438 "serial_number": "00000000000000000000", 00:17:28.438 "model_number": "SPDK bdev Controller", 00:17:28.438 "max_namespaces": 32, 00:17:28.438 "min_cntlid": 1, 00:17:28.438 "max_cntlid": 65519, 00:17:28.438 "ana_reporting": false 00:17:28.438 } 00:17:28.438 }, 00:17:28.438 { 00:17:28.438 "method": "nvmf_subsystem_add_host", 00:17:28.438 "params": { 00:17:28.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.438 "host": "nqn.2016-06.io.spdk:host1", 00:17:28.438 "psk": "key0" 00:17:28.438 } 00:17:28.438 }, 00:17:28.438 { 00:17:28.438 "method": "nvmf_subsystem_add_ns", 00:17:28.438 "params": { 00:17:28.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.438 "namespace": { 00:17:28.438 "nsid": 1, 00:17:28.438 "bdev_name": "malloc0", 00:17:28.438 "nguid": "883B1BF8EF014853855200FDE764639E", 00:17:28.438 "uuid": "883b1bf8-ef01-4853-8552-00fde764639e", 00:17:28.438 "no_auto_visible": false 00:17:28.438 } 00:17:28.438 } 00:17:28.438 }, 00:17:28.438 { 00:17:28.438 "method": "nvmf_subsystem_add_listener", 00:17:28.438 "params": { 00:17:28.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.438 "listen_address": { 00:17:28.438 "trtype": "TCP", 00:17:28.438 "adrfam": "IPv4", 00:17:28.438 "traddr": "10.0.0.2", 00:17:28.438 "trsvcid": "4420" 00:17:28.438 }, 00:17:28.438 "secure_channel": false, 00:17:28.438 "sock_impl": "ssl" 00:17:28.438 } 00:17:28.438 } 00:17:28.438 ] 00:17:28.438 } 00:17:28.438 ] 00:17:28.438 }' 00:17:28.438 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:28.698 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:17:28.698 "subsystems": [ 00:17:28.698 { 00:17:28.698 "subsystem": "keyring", 00:17:28.698 "config": [ 00:17:28.698 { 00:17:28.698 "method": "keyring_file_add_key", 00:17:28.698 "params": { 00:17:28.698 "name": "key0", 00:17:28.698 "path": "/tmp/tmp.k7xfqTV6me" 00:17:28.698 } 00:17:28.698 } 00:17:28.698 ] 00:17:28.698 }, 00:17:28.698 { 00:17:28.698 "subsystem": "iobuf", 00:17:28.698 
"config": [ 00:17:28.698 { 00:17:28.698 "method": "iobuf_set_options", 00:17:28.698 "params": { 00:17:28.698 "small_pool_count": 8192, 00:17:28.698 "large_pool_count": 1024, 00:17:28.698 "small_bufsize": 8192, 00:17:28.698 "large_bufsize": 135168 00:17:28.698 } 00:17:28.698 } 00:17:28.698 ] 00:17:28.698 }, 00:17:28.698 { 00:17:28.698 "subsystem": "sock", 00:17:28.698 "config": [ 00:17:28.698 { 00:17:28.698 "method": "sock_set_default_impl", 00:17:28.698 "params": { 00:17:28.698 "impl_name": "posix" 00:17:28.698 } 00:17:28.698 }, 00:17:28.698 { 00:17:28.698 "method": "sock_impl_set_options", 00:17:28.698 "params": { 00:17:28.698 "impl_name": "ssl", 00:17:28.698 "recv_buf_size": 4096, 00:17:28.698 "send_buf_size": 4096, 00:17:28.698 "enable_recv_pipe": true, 00:17:28.698 "enable_quickack": false, 00:17:28.698 "enable_placement_id": 0, 00:17:28.698 "enable_zerocopy_send_server": true, 00:17:28.698 "enable_zerocopy_send_client": false, 00:17:28.698 "zerocopy_threshold": 0, 00:17:28.699 "tls_version": 0, 00:17:28.699 "enable_ktls": false 00:17:28.699 } 00:17:28.699 }, 00:17:28.699 { 00:17:28.699 "method": "sock_impl_set_options", 00:17:28.699 "params": { 00:17:28.699 "impl_name": "posix", 00:17:28.699 "recv_buf_size": 2097152, 00:17:28.699 "send_buf_size": 2097152, 00:17:28.699 "enable_recv_pipe": true, 00:17:28.699 "enable_quickack": false, 00:17:28.699 "enable_placement_id": 0, 00:17:28.699 "enable_zerocopy_send_server": true, 00:17:28.699 "enable_zerocopy_send_client": false, 00:17:28.699 "zerocopy_threshold": 0, 00:17:28.699 "tls_version": 0, 00:17:28.699 "enable_ktls": false 00:17:28.699 } 00:17:28.699 } 00:17:28.699 ] 00:17:28.699 }, 00:17:28.699 { 00:17:28.699 "subsystem": "vmd", 00:17:28.699 "config": [] 00:17:28.699 }, 00:17:28.699 { 00:17:28.699 "subsystem": "accel", 00:17:28.699 "config": [ 00:17:28.699 { 00:17:28.699 "method": "accel_set_options", 00:17:28.699 "params": { 00:17:28.699 "small_cache_size": 128, 00:17:28.699 "large_cache_size": 16, 00:17:28.699 "task_count": 2048, 00:17:28.699 "sequence_count": 2048, 00:17:28.699 "buf_count": 2048 00:17:28.699 } 00:17:28.699 } 00:17:28.699 ] 00:17:28.699 }, 00:17:28.699 { 00:17:28.699 "subsystem": "bdev", 00:17:28.699 "config": [ 00:17:28.699 { 00:17:28.699 "method": "bdev_set_options", 00:17:28.699 "params": { 00:17:28.699 "bdev_io_pool_size": 65535, 00:17:28.699 "bdev_io_cache_size": 256, 00:17:28.699 "bdev_auto_examine": true, 00:17:28.699 "iobuf_small_cache_size": 128, 00:17:28.699 "iobuf_large_cache_size": 16 00:17:28.699 } 00:17:28.699 }, 00:17:28.699 { 00:17:28.699 "method": "bdev_raid_set_options", 00:17:28.699 "params": { 00:17:28.699 "process_window_size_kb": 1024, 00:17:28.699 "process_max_bandwidth_mb_sec": 0 00:17:28.699 } 00:17:28.699 }, 00:17:28.699 { 00:17:28.699 "method": "bdev_iscsi_set_options", 00:17:28.699 "params": { 00:17:28.699 "timeout_sec": 30 00:17:28.699 } 00:17:28.699 }, 00:17:28.699 { 00:17:28.699 "method": "bdev_nvme_set_options", 00:17:28.699 "params": { 00:17:28.699 "action_on_timeout": "none", 00:17:28.699 "timeout_us": 0, 00:17:28.699 "timeout_admin_us": 0, 00:17:28.699 "keep_alive_timeout_ms": 10000, 00:17:28.699 "arbitration_burst": 0, 00:17:28.699 "low_priority_weight": 0, 00:17:28.699 "medium_priority_weight": 0, 00:17:28.699 "high_priority_weight": 0, 00:17:28.699 "nvme_adminq_poll_period_us": 10000, 00:17:28.699 "nvme_ioq_poll_period_us": 0, 00:17:28.699 "io_queue_requests": 512, 00:17:28.699 "delay_cmd_submit": true, 00:17:28.699 "transport_retry_count": 4, 00:17:28.699 "bdev_retry_count": 3, 
00:17:28.699 "transport_ack_timeout": 0, 00:17:28.699 "ctrlr_loss_timeout_sec": 0, 00:17:28.699 "reconnect_delay_sec": 0, 00:17:28.699 "fast_io_fail_timeout_sec": 0, 00:17:28.699 "disable_auto_failback": false, 00:17:28.699 "generate_uuids": false, 00:17:28.699 "transport_tos": 0, 00:17:28.699 "nvme_error_stat": false, 00:17:28.699 "rdma_srq_size": 0, 00:17:28.699 "io_path_stat": false, 00:17:28.699 "allow_accel_sequence": false, 00:17:28.699 "rdma_max_cq_size": 0, 00:17:28.699 "rdma_cm_event_timeout_ms": 0, 00:17:28.699 "dhchap_digests": [ 00:17:28.699 "sha256", 00:17:28.699 "sha384", 00:17:28.699 "sha512" 00:17:28.699 ], 00:17:28.699 "dhchap_dhgroups": [ 00:17:28.699 "null", 00:17:28.699 "ffdhe2048", 00:17:28.699 "ffdhe3072", 00:17:28.699 "ffdhe4096", 00:17:28.699 "ffdhe6144", 00:17:28.699 "ffdhe8192" 00:17:28.699 ] 00:17:28.699 } 00:17:28.699 }, 00:17:28.699 { 00:17:28.699 "method": "bdev_nvme_attach_controller", 00:17:28.699 "params": { 00:17:28.699 "name": "nvme0", 00:17:28.699 "trtype": "TCP", 00:17:28.699 "adrfam": "IPv4", 00:17:28.699 "traddr": "10.0.0.2", 00:17:28.699 "trsvcid": "4420", 00:17:28.699 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.699 "prchk_reftag": false, 00:17:28.699 "prchk_guard": false, 00:17:28.699 "ctrlr_loss_timeout_sec": 0, 00:17:28.699 "reconnect_delay_sec": 0, 00:17:28.699 "fast_io_fail_timeout_sec": 0, 00:17:28.699 "psk": "key0", 00:17:28.699 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:28.699 "hdgst": false, 00:17:28.699 "ddgst": false 00:17:28.699 } 00:17:28.699 }, 00:17:28.699 { 00:17:28.699 "method": "bdev_nvme_set_hotplug", 00:17:28.699 "params": { 00:17:28.699 "period_us": 100000, 00:17:28.699 "enable": false 00:17:28.699 } 00:17:28.699 }, 00:17:28.699 { 00:17:28.699 "method": "bdev_enable_histogram", 00:17:28.699 "params": { 00:17:28.699 "name": "nvme0n1", 00:17:28.699 "enable": true 00:17:28.699 } 00:17:28.699 }, 00:17:28.699 { 00:17:28.699 "method": "bdev_wait_for_examine" 00:17:28.699 } 00:17:28.699 ] 00:17:28.699 }, 00:17:28.699 { 00:17:28.699 "subsystem": "nbd", 00:17:28.699 "config": [] 00:17:28.699 } 00:17:28.699 ] 00:17:28.699 }' 00:17:28.699 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 587059 00:17:28.699 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 587059 ']' 00:17:28.699 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 587059 00:17:28.699 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:28.699 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:28.699 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 587059 00:17:28.699 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:28.699 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:28.699 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 587059' 00:17:28.699 killing process with pid 587059 00:17:28.699 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 587059 00:17:28.699 Received shutdown signal, test time was about 1.000000 seconds 00:17:28.699 00:17:28.699 Latency(us) 00:17:28.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.699 
=================================================================================================================== 00:17:28.699 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:28.699 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 587059 00:17:28.960 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 587037 00:17:28.960 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 587037 ']' 00:17:28.960 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 587037 00:17:28.960 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:28.960 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:28.960 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 587037 00:17:28.960 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:28.960 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:28.960 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 587037' 00:17:28.960 killing process with pid 587037 00:17:28.960 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 587037 00:17:28.960 13:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 587037 00:17:29.219 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:17:29.219 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:29.219 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:17:29.219 "subsystems": [ 00:17:29.219 { 00:17:29.219 "subsystem": "keyring", 00:17:29.219 "config": [ 00:17:29.219 { 00:17:29.219 "method": "keyring_file_add_key", 00:17:29.219 "params": { 00:17:29.219 "name": "key0", 00:17:29.219 "path": "/tmp/tmp.k7xfqTV6me" 00:17:29.219 } 00:17:29.219 } 00:17:29.219 ] 00:17:29.219 }, 00:17:29.219 { 00:17:29.219 "subsystem": "iobuf", 00:17:29.219 "config": [ 00:17:29.219 { 00:17:29.219 "method": "iobuf_set_options", 00:17:29.219 "params": { 00:17:29.219 "small_pool_count": 8192, 00:17:29.219 "large_pool_count": 1024, 00:17:29.219 "small_bufsize": 8192, 00:17:29.219 "large_bufsize": 135168 00:17:29.219 } 00:17:29.219 } 00:17:29.219 ] 00:17:29.219 }, 00:17:29.219 { 00:17:29.219 "subsystem": "sock", 00:17:29.219 "config": [ 00:17:29.219 { 00:17:29.219 "method": "sock_set_default_impl", 00:17:29.219 "params": { 00:17:29.219 "impl_name": "posix" 00:17:29.219 } 00:17:29.219 }, 00:17:29.219 { 00:17:29.219 "method": "sock_impl_set_options", 00:17:29.219 "params": { 00:17:29.219 "impl_name": "ssl", 00:17:29.219 "recv_buf_size": 4096, 00:17:29.219 "send_buf_size": 4096, 00:17:29.219 "enable_recv_pipe": true, 00:17:29.219 "enable_quickack": false, 00:17:29.219 "enable_placement_id": 0, 00:17:29.219 "enable_zerocopy_send_server": true, 00:17:29.219 "enable_zerocopy_send_client": false, 00:17:29.219 "zerocopy_threshold": 0, 00:17:29.219 "tls_version": 0, 00:17:29.219 "enable_ktls": false 00:17:29.219 } 00:17:29.219 }, 00:17:29.219 { 00:17:29.219 "method": "sock_impl_set_options", 00:17:29.219 "params": { 00:17:29.219 "impl_name": "posix", 00:17:29.219 "recv_buf_size": 2097152, 00:17:29.219 
"send_buf_size": 2097152, 00:17:29.219 "enable_recv_pipe": true, 00:17:29.219 "enable_quickack": false, 00:17:29.219 "enable_placement_id": 0, 00:17:29.219 "enable_zerocopy_send_server": true, 00:17:29.219 "enable_zerocopy_send_client": false, 00:17:29.219 "zerocopy_threshold": 0, 00:17:29.219 "tls_version": 0, 00:17:29.219 "enable_ktls": false 00:17:29.219 } 00:17:29.219 } 00:17:29.219 ] 00:17:29.219 }, 00:17:29.219 { 00:17:29.219 "subsystem": "vmd", 00:17:29.219 "config": [] 00:17:29.219 }, 00:17:29.219 { 00:17:29.219 "subsystem": "accel", 00:17:29.219 "config": [ 00:17:29.219 { 00:17:29.219 "method": "accel_set_options", 00:17:29.219 "params": { 00:17:29.219 "small_cache_size": 128, 00:17:29.219 "large_cache_size": 16, 00:17:29.219 "task_count": 2048, 00:17:29.219 "sequence_count": 2048, 00:17:29.219 "buf_count": 2048 00:17:29.219 } 00:17:29.219 } 00:17:29.219 ] 00:17:29.219 }, 00:17:29.219 { 00:17:29.219 "subsystem": "bdev", 00:17:29.219 "config": [ 00:17:29.219 { 00:17:29.219 "method": "bdev_set_options", 00:17:29.219 "params": { 00:17:29.219 "bdev_io_pool_size": 65535, 00:17:29.219 "bdev_io_cache_size": 256, 00:17:29.219 "bdev_auto_examine": true, 00:17:29.219 "iobuf_small_cache_size": 128, 00:17:29.219 "iobuf_large_cache_size": 16 00:17:29.219 } 00:17:29.219 }, 00:17:29.219 { 00:17:29.219 "method": "bdev_raid_set_options", 00:17:29.219 "params": { 00:17:29.219 "process_window_size_kb": 1024, 00:17:29.219 "process_max_bandwidth_mb_sec": 0 00:17:29.219 } 00:17:29.219 }, 00:17:29.219 { 00:17:29.219 "method": "bdev_iscsi_set_options", 00:17:29.219 "params": { 00:17:29.219 "timeout_sec": 30 00:17:29.219 } 00:17:29.219 }, 00:17:29.219 { 00:17:29.219 "method": "bdev_nvme_set_options", 00:17:29.219 "params": { 00:17:29.219 "action_on_timeout": "none", 00:17:29.219 "timeout_us": 0, 00:17:29.219 "timeout_admin_us": 0, 00:17:29.219 "keep_alive_timeout_ms": 10000, 00:17:29.219 "arbitration_burst": 0, 00:17:29.219 "low_priority_weight": 0, 00:17:29.219 "medium_priority_weight": 0, 00:17:29.220 "high_priority_weight": 0, 00:17:29.220 "nvme_adminq_poll_period_us": 10000, 00:17:29.220 "nvme_ioq_poll_period_us": 0, 00:17:29.220 "io_queue_requests": 0, 00:17:29.220 "delay_cmd_submit": true, 00:17:29.220 "transport_retry_count": 4, 00:17:29.220 "bdev_retry_count": 3, 00:17:29.220 "transport_ack_timeout": 0, 00:17:29.220 "ctrlr_loss_timeout_sec": 0, 00:17:29.220 "reconnect_delay_sec": 0, 00:17:29.220 "fast_io_fail_timeout_sec": 0, 00:17:29.220 "disable_auto_failback": false, 00:17:29.220 "generate_uuids": false, 00:17:29.220 "transport_tos": 0, 00:17:29.220 "nvme_error_stat": false, 00:17:29.220 "rdma_srq_size": 0, 00:17:29.220 "io_path_stat": false, 00:17:29.220 "allow_accel_sequence": false, 00:17:29.220 "rdma_max_cq_size": 0, 00:17:29.220 "rdma_cm_event_timeout_ms": 0, 00:17:29.220 "dhchap_digests": [ 00:17:29.220 "sha256", 00:17:29.220 "sha384", 00:17:29.220 "sha512" 00:17:29.220 ], 00:17:29.220 "dhchap_dhgroups": [ 00:17:29.220 "null", 00:17:29.220 "ffdhe2048", 00:17:29.220 "ffdhe3072", 00:17:29.220 "ffdhe4096", 00:17:29.220 "ffdhe6144", 00:17:29.220 "ffdhe8192" 00:17:29.220 ] 00:17:29.220 } 00:17:29.220 }, 00:17:29.220 { 00:17:29.220 "method": "bdev_nvme_set_hotplug", 00:17:29.220 "params": { 00:17:29.220 "period_us": 100000, 00:17:29.220 "enable": false 00:17:29.220 } 00:17:29.220 }, 00:17:29.220 { 00:17:29.220 "method": "bdev_malloc_create", 00:17:29.220 "params": { 00:17:29.220 "name": "malloc0", 00:17:29.220 "num_blocks": 8192, 00:17:29.220 "block_size": 4096, 00:17:29.220 
"physical_block_size": 4096, 00:17:29.220 "uuid": "883b1bf8-ef01-4853-8552-00fde764639e", 00:17:29.220 "optimal_io_boundary": 0, 00:17:29.220 "md_size": 0, 00:17:29.220 "dif_type": 0, 00:17:29.220 "dif_is_head_of_md": false, 00:17:29.220 "dif_pi_format": 0 00:17:29.220 } 00:17:29.220 }, 00:17:29.220 { 00:17:29.220 "method": "bdev_wait_for_examine" 00:17:29.220 } 00:17:29.220 ] 00:17:29.220 }, 00:17:29.220 { 00:17:29.220 "subsystem": "nbd", 00:17:29.220 "config": [] 00:17:29.220 }, 00:17:29.220 { 00:17:29.220 "subsystem": "scheduler", 00:17:29.220 "config": [ 00:17:29.220 { 00:17:29.220 "method": "framework_set_scheduler", 00:17:29.220 "params": { 00:17:29.220 "name": "static" 00:17:29.220 } 00:17:29.220 } 00:17:29.220 ] 00:17:29.220 }, 00:17:29.220 { 00:17:29.220 "subsystem": "nvmf", 00:17:29.220 "config": [ 00:17:29.220 { 00:17:29.220 "method": "nvmf_set_config", 00:17:29.220 "params": { 00:17:29.220 "discovery_filter": "match_any", 00:17:29.220 "admin_cmd_passthru": { 00:17:29.220 "identify_ctrlr": false 00:17:29.220 } 00:17:29.220 } 00:17:29.220 }, 00:17:29.220 { 00:17:29.220 "method": "nvmf_set_max_subsystems", 00:17:29.220 "params": { 00:17:29.220 "max_subsystems": 1024 00:17:29.220 } 00:17:29.220 }, 00:17:29.220 { 00:17:29.220 "method": "nvmf_set_crdt", 00:17:29.220 "params": { 00:17:29.220 "crdt1": 0, 00:17:29.220 "crdt2": 0, 00:17:29.220 "crdt3": 0 00:17:29.220 } 00:17:29.220 }, 00:17:29.220 { 00:17:29.220 "method": "nvmf_create_transport", 00:17:29.220 "params": { 00:17:29.220 "trtype": "TCP", 00:17:29.220 "max_queue_depth": 128, 00:17:29.220 "max_io_qpairs_per_ctrlr": 127, 00:17:29.220 "in_capsule_data_size": 4096, 00:17:29.220 "max_io_size": 131072, 00:17:29.220 "io_unit_size": 131072, 00:17:29.220 "max_aq_depth": 128, 00:17:29.220 "num_shared_buffers": 511, 00:17:29.220 "buf_cache_size": 4294967295, 00:17:29.220 "dif_insert_or_strip": false, 00:17:29.220 "zcopy": false, 00:17:29.220 "c2h_success": false, 00:17:29.220 "sock_priority": 0, 00:17:29.220 "abort_timeout_sec": 1, 00:17:29.220 "ack_timeout": 0, 00:17:29.220 "data_wr_pool_size": 0 00:17:29.220 } 00:17:29.220 }, 00:17:29.220 { 00:17:29.220 "method": "nvmf_create_subsystem", 00:17:29.220 "params": { 00:17:29.220 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.220 "allow_any_host": false, 00:17:29.220 "serial_number": "00000000000000000000", 00:17:29.220 "model_number": "SPDK bdev Controller", 00:17:29.220 "max_namespaces": 32, 00:17:29.220 "min_cntlid": 1, 00:17:29.220 "max_cntlid": 65519, 00:17:29.220 "ana_reporting": false 00:17:29.220 } 00:17:29.220 }, 00:17:29.220 { 00:17:29.220 "method": "nvmf_subsystem_add_host", 00:17:29.220 "params": { 00:17:29.220 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.220 "host": "nqn.2016-06.io.spdk:host1", 00:17:29.220 "psk": "key0" 00:17:29.220 } 00:17:29.220 }, 00:17:29.220 { 00:17:29.220 "method": "nvmf_subsystem_add_ns", 00:17:29.220 "params": { 00:17:29.220 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.220 "namespace": { 00:17:29.220 "nsid": 1, 00:17:29.220 "bdev_name": "malloc0", 00:17:29.220 "nguid": "883B1BF8EF014853855200FDE764639E", 00:17:29.220 "uuid": "883b1bf8-ef01-4853-8552-00fde764639e", 00:17:29.220 "no_auto_visible": false 00:17:29.220 } 00:17:29.220 } 00:17:29.220 }, 00:17:29.220 { 00:17:29.220 "method": "nvmf_subsystem_add_listener", 00:17:29.220 "params": { 00:17:29.220 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.220 "listen_address": { 00:17:29.220 "trtype": "TCP", 00:17:29.220 "adrfam": "IPv4", 00:17:29.220 "traddr": "10.0.0.2", 00:17:29.220 "trsvcid": "4420" 
00:17:29.220 }, 00:17:29.220 "secure_channel": false, 00:17:29.220 "sock_impl": "ssl" 00:17:29.220 } 00:17:29.220 } 00:17:29.220 ] 00:17:29.220 } 00:17:29.220 ] 00:17:29.220 }' 00:17:29.220 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:29.220 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:29.480 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=587468 00:17:29.480 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:17:29.480 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 587468 00:17:29.480 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 587468 ']' 00:17:29.480 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.480 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:29.480 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.480 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:29.480 13:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:29.480 [2024-07-25 13:46:26.304460] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:29.480 [2024-07-25 13:46:26.304548] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.480 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.480 [2024-07-25 13:46:26.368688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.480 [2024-07-25 13:46:26.477895] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.480 [2024-07-25 13:46:26.477973] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:29.480 [2024-07-25 13:46:26.477986] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:29.480 [2024-07-25 13:46:26.477997] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:29.480 [2024-07-25 13:46:26.478006] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
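The JSON consumed above via -c /dev/fd/62 is the whole TLS target in one shot: a file-based PSK registered under the name key0, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 restricted to host nqn.2016-06.io.spdk:host1 with that PSK, and a listener on 10.0.0.2:4420 pinned to the ssl socket implementation. As a rough sketch, the same state could be built against an already-running target over its RPC socket; the method names and values below come straight from the JSON, but the short option spellings are assumptions about this build's scripts/rpc.py, not something this log shows:

  # Sketch: rebuild the TLS target state by RPC instead of a startup config file.
  RPC=./scripts/rpc.py                                  # assumed path inside the SPDK checkout
  $RPC keyring_file_add_key key0 /tmp/tmp.k7xfqTV6me    # file-based TLS PSK named "key0"
  $RPC bdev_malloc_create -b malloc0 32 4096            # 8192 blocks x 4096 B = 32 MiB ramdisk
  $RPC nvmf_create_transport -t tcp
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 00000000000000000000 -m 32
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -f ipv4 -a 10.0.0.2 -s 4420

The captured listener additionally carries "sock_impl": "ssl" and "secure_channel": false; the rpc.py spelling for those two options is not visible in this excerpt, so they are left out of the sketch.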
00:17:29.480 [2024-07-25 13:46:26.478088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.740 [2024-07-25 13:46:26.715577] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:29.740 [2024-07-25 13:46:26.753518] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:29.740 [2024-07-25 13:46:26.753758] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.307 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:30.307 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:30.307 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:30.307 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:30.307 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:30.307 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.307 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=587617 00:17:30.307 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 587617 /var/tmp/bdevperf.sock 00:17:30.307 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 587617 ']' 00:17:30.307 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:30.307 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:30.307 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:30.307 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:17:30.307 "subsystems": [ 00:17:30.307 { 00:17:30.307 "subsystem": "keyring", 00:17:30.307 "config": [ 00:17:30.307 { 00:17:30.307 "method": "keyring_file_add_key", 00:17:30.307 "params": { 00:17:30.307 "name": "key0", 00:17:30.307 "path": "/tmp/tmp.k7xfqTV6me" 00:17:30.307 } 00:17:30.307 } 00:17:30.307 ] 00:17:30.307 }, 00:17:30.307 { 00:17:30.307 "subsystem": "iobuf", 00:17:30.307 "config": [ 00:17:30.307 { 00:17:30.307 "method": "iobuf_set_options", 00:17:30.307 "params": { 00:17:30.307 "small_pool_count": 8192, 00:17:30.307 "large_pool_count": 1024, 00:17:30.307 "small_bufsize": 8192, 00:17:30.307 "large_bufsize": 135168 00:17:30.307 } 00:17:30.307 } 00:17:30.307 ] 00:17:30.307 }, 00:17:30.307 { 00:17:30.307 "subsystem": "sock", 00:17:30.307 "config": [ 00:17:30.307 { 00:17:30.307 "method": "sock_set_default_impl", 00:17:30.307 "params": { 00:17:30.307 "impl_name": "posix" 00:17:30.307 } 00:17:30.307 }, 00:17:30.307 { 00:17:30.307 "method": "sock_impl_set_options", 00:17:30.307 "params": { 00:17:30.307 "impl_name": "ssl", 00:17:30.307 "recv_buf_size": 4096, 00:17:30.307 "send_buf_size": 4096, 00:17:30.307 "enable_recv_pipe": true, 00:17:30.307 "enable_quickack": false, 00:17:30.307 "enable_placement_id": 0, 00:17:30.307 "enable_zerocopy_send_server": true, 00:17:30.307 "enable_zerocopy_send_client": false, 00:17:30.307 "zerocopy_threshold": 0, 00:17:30.307 "tls_version": 0, 00:17:30.307 "enable_ktls": false 00:17:30.307 } 
00:17:30.307 }, 00:17:30.307 { 00:17:30.307 "method": "sock_impl_set_options", 00:17:30.307 "params": { 00:17:30.307 "impl_name": "posix", 00:17:30.307 "recv_buf_size": 2097152, 00:17:30.307 "send_buf_size": 2097152, 00:17:30.307 "enable_recv_pipe": true, 00:17:30.307 "enable_quickack": false, 00:17:30.307 "enable_placement_id": 0, 00:17:30.307 "enable_zerocopy_send_server": true, 00:17:30.307 "enable_zerocopy_send_client": false, 00:17:30.307 "zerocopy_threshold": 0, 00:17:30.307 "tls_version": 0, 00:17:30.307 "enable_ktls": false 00:17:30.307 } 00:17:30.307 } 00:17:30.307 ] 00:17:30.307 }, 00:17:30.307 { 00:17:30.307 "subsystem": "vmd", 00:17:30.307 "config": [] 00:17:30.307 }, 00:17:30.307 { 00:17:30.307 "subsystem": "accel", 00:17:30.307 "config": [ 00:17:30.307 { 00:17:30.307 "method": "accel_set_options", 00:17:30.307 "params": { 00:17:30.308 "small_cache_size": 128, 00:17:30.308 "large_cache_size": 16, 00:17:30.308 "task_count": 2048, 00:17:30.308 "sequence_count": 2048, 00:17:30.308 "buf_count": 2048 00:17:30.308 } 00:17:30.308 } 00:17:30.308 ] 00:17:30.308 }, 00:17:30.308 { 00:17:30.308 "subsystem": "bdev", 00:17:30.308 "config": [ 00:17:30.308 { 00:17:30.308 "method": "bdev_set_options", 00:17:30.308 "params": { 00:17:30.308 "bdev_io_pool_size": 65535, 00:17:30.308 "bdev_io_cache_size": 256, 00:17:30.308 "bdev_auto_examine": true, 00:17:30.308 "iobuf_small_cache_size": 128, 00:17:30.308 "iobuf_large_cache_size": 16 00:17:30.308 } 00:17:30.308 }, 00:17:30.308 { 00:17:30.308 "method": "bdev_raid_set_options", 00:17:30.308 "params": { 00:17:30.308 "process_window_size_kb": 1024, 00:17:30.308 "process_max_bandwidth_mb_sec": 0 00:17:30.308 } 00:17:30.308 }, 00:17:30.308 { 00:17:30.308 "method": "bdev_iscsi_set_options", 00:17:30.308 "params": { 00:17:30.308 "timeout_sec": 30 00:17:30.308 } 00:17:30.308 }, 00:17:30.308 { 00:17:30.308 "method": "bdev_nvme_set_options", 00:17:30.308 "params": { 00:17:30.308 "action_on_timeout": "none", 00:17:30.308 "timeout_us": 0, 00:17:30.308 "timeout_admin_us": 0, 00:17:30.308 "keep_alive_timeout_ms": 10000, 00:17:30.308 "arbitration_burst": 0, 00:17:30.308 "low_priority_weight": 0, 00:17:30.308 "medium_priority_weight": 0, 00:17:30.308 "high_priority_weight": 0, 00:17:30.308 "nvme_adminq_poll_period_us": 10000, 00:17:30.308 "nvme_ioq_poll_period_us": 0, 00:17:30.308 "io_queue_requests": 512, 00:17:30.308 "delay_cmd_submit": true, 00:17:30.308 "transport_retry_count": 4, 00:17:30.308 "bdev_retry_count": 3, 00:17:30.308 "transport_ack_timeout": 0, 00:17:30.308 "ctrlr_loss_timeout_sec": 0, 00:17:30.308 "reconnect_delay_sec": 0, 00:17:30.308 "fast_io_fail_timeout_sec": 0, 00:17:30.308 "disable_auto_failback": false, 00:17:30.308 "generate_uuids": false, 00:17:30.308 "transport_tos": 0, 00:17:30.308 "nvme_error_stat": false, 00:17:30.308 "rdma_srq_size": 0, 00:17:30.308 "io_path_stat": false, 00:17:30.308 "allow_accel_sequence": false, 00:17:30.308 "rdma_max_cq_size": 0, 00:17:30.308 "rdma_cm_event_timeout_ms": 0, 00:17:30.308 "dhchap_digests": [ 00:17:30.308 "sha256", 00:17:30.308 "sha384", 00:17:30.308 "sha512" 00:17:30.308 ], 00:17:30.308 "dhchap_dhgroups": [ 00:17:30.308 "null", 00:17:30.308 "ffdhe2048", 00:17:30.308 "ffdhe3072", 00:17:30.308 "ffdhe4096", 00:17:30.308 "ffdhe6144", 00:17:30.308 "ffdhe8192" 00:17:30.308 ] 00:17:30.308 } 00:17:30.308 }, 00:17:30.308 { 00:17:30.308 "method": "bdev_nvme_attach_controller", 00:17:30.308 "params": { 00:17:30.308 "name": "nvme0", 00:17:30.308 "trtype": "TCP", 00:17:30.308 "adrfam": "IPv4", 00:17:30.308 
"traddr": "10.0.0.2", 00:17:30.308 "trsvcid": "4420", 00:17:30.308 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.308 "prchk_reftag": false, 00:17:30.308 "prchk_guard": false, 00:17:30.308 "ctrlr_loss_timeout_sec": 0, 00:17:30.308 "reconnect_delay_sec": 0, 00:17:30.308 "fast_io_fail_timeout_sec": 0, 00:17:30.308 "psk": "key0", 00:17:30.308 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:30.308 "hdgst": false, 00:17:30.308 "ddgst": false 00:17:30.308 } 00:17:30.308 }, 00:17:30.308 { 00:17:30.308 "method": "bdev_nvme_set_hotplug", 00:17:30.308 "params": { 00:17:30.308 "period_us": 100000, 00:17:30.308 "enable": false 00:17:30.308 } 00:17:30.308 }, 00:17:30.308 { 00:17:30.308 "method": "bdev_enable_histogram", 00:17:30.308 "params": { 00:17:30.308 "name": "nvme0n1", 00:17:30.308 "enable": true 00:17:30.308 } 00:17:30.308 }, 00:17:30.308 { 00:17:30.308 "method": "bdev_wait_for_examine" 00:17:30.308 } 00:17:30.308 ] 00:17:30.308 }, 00:17:30.308 { 00:17:30.308 "subsystem": "nbd", 00:17:30.308 "config": [] 00:17:30.308 } 00:17:30.308 ] 00:17:30.308 }' 00:17:30.308 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:30.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:30.308 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:30.308 13:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:30.308 [2024-07-25 13:46:27.309990] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:30.308 [2024-07-25 13:46:27.310106] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid587617 ] 00:17:30.308 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.568 [2024-07-25 13:46:27.372923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.568 [2024-07-25 13:46:27.481749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.828 [2024-07-25 13:46:27.660673] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:31.394 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:31.395 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:31.395 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:31.395 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:17:31.654 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.654 13:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:31.654 Running I/O for 1 seconds... 
00:17:33.030 00:17:33.030 Latency(us) 00:17:33.030 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.030 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:33.030 Verification LBA range: start 0x0 length 0x2000 00:17:33.030 nvme0n1 : 1.02 3436.99 13.43 0.00 0.00 36913.99 5679.79 46797.56 00:17:33.030 =================================================================================================================== 00:17:33.030 Total : 3436.99 13.43 0.00 0.00 36913.99 5679.79 46797.56 00:17:33.030 0 00:17:33.030 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:17:33.030 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:17:33.030 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:17:33.030 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:17:33.030 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:17:33.030 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:17:33.030 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:33.030 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:17:33.030 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:17:33.030 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:17:33.030 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:33.030 nvmf_trace.0 00:17:33.030 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:17:33.030 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 587617 00:17:33.031 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 587617 ']' 00:17:33.031 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 587617 00:17:33.031 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:33.031 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:33.031 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 587617 00:17:33.031 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:33.031 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:33.031 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 587617' 00:17:33.031 killing process with pid 587617 00:17:33.031 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 587617 00:17:33.031 Received shutdown signal, test time was about 1.000000 seconds 00:17:33.031 00:17:33.031 Latency(us) 00:17:33.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.031 
=================================================================================================================== 00:17:33.031 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:33.031 13:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 587617 00:17:33.290 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:17:33.290 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:33.290 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:17:33.290 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:33.290 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:17:33.290 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:33.290 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:33.290 rmmod nvme_tcp 00:17:33.290 rmmod nvme_fabrics 00:17:33.290 rmmod nvme_keyring 00:17:33.290 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:33.290 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:17:33.290 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:17:33.290 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 587468 ']' 00:17:33.290 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 587468 00:17:33.290 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 587468 ']' 00:17:33.290 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 587468 00:17:33.290 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:33.290 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:33.290 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 587468 00:17:33.290 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:33.290 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:33.290 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 587468' 00:17:33.290 killing process with pid 587468 00:17:33.290 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 587468 00:17:33.290 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 587468 00:17:33.548 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:33.548 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:33.548 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:33.548 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:33.548 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:33.548 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.548 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:17:33.548 13:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.450 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:35.450 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.43YTnXYNR2 /tmp/tmp.0ZedZkEPxx /tmp/tmp.k7xfqTV6me 00:17:35.450 00:17:35.450 real 1m20.923s 00:17:35.450 user 2m8.953s 00:17:35.450 sys 0m26.030s 00:17:35.450 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:35.450 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:35.450 ************************************ 00:17:35.450 END TEST nvmf_tls 00:17:35.450 ************************************ 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:35.708 ************************************ 00:17:35.708 START TEST nvmf_fips 00:17:35.708 ************************************ 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:35.708 * Looking for test storage... 00:17:35.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 
00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:35.708 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:17:35.709 Error setting digest 00:17:35.709 0032F9B76D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:17:35.709 0032F9B76D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:35.709 
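The fips.sh gate traced above has three parts: openssl version must report at least 3.0.0 (3.0.9 on this host), openssl list -providers under the generated spdk_fips.conf must show both a base and a fips provider, and a non-approved digest must be refused, which is why the "Error setting digest" MD5 failure is the passing outcome. The same probe stand-alone (the echoed input string is an arbitrary stand-in):

  # FIPS enforcement probe: MD5 is outside the approved set, so it must fail.
  openssl version | awk '{print $2}'      # 3.0.9 here; needs to be >= 3.0.0
  openssl list -providers | grep name     # expect a base and a fips provider
  if echo probe | openssl md5 >/dev/null 2>&1; then
      echo "md5 succeeded: FIPS mode is NOT enforced" >&2
      exit 1
  fi
  echo "md5 rejected, as the trace above expects"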
13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:17:35.709 13:46:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:38.240 13:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:38.240 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:38.240 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:38.241 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.241 13:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:38.241 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:38.241 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.241 13:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:38.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:38.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:17:38.241 00:17:38.241 --- 10.0.0.2 ping statistics --- 00:17:38.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.241 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:38.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:38.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:17:38.241 00:17:38.241 --- 10.0.0.1 ping statistics --- 00:17:38.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.241 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=589935 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 589935 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 589935 ']' 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:38.241 13:46:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:38.241 [2024-07-25 13:46:35.030768] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
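
Both pings succeeding confirms the test topology: one port of the NIC (cvl_0_0, 10.0.0.2) has been moved into the private namespace cvl_0_0_ns_spdk to act as the target, while its sibling port (cvl_0_1, 10.0.0.1) stays in the default namespace as the initiator, so traffic between them crosses the physical link even on a single host. A condensed sketch of the equivalent manual setup, using the interface names from this run:

    # Isolate the target port in its own network namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address and bring up both sides.
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator (default ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target ns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Make sure local firewalling does not drop NVMe/TCP (port 4420).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Verify reachability in both directions.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application itself then runs entirely inside that namespace, which is why NVMF_APP is prefixed with the netns command above: ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2.
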
00:17:38.241 [2024-07-25 13:46:35.030850] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.241 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.241 [2024-07-25 13:46:35.092637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.241 [2024-07-25 13:46:35.201762] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.241 [2024-07-25 13:46:35.201825] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.241 [2024-07-25 13:46:35.201839] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.241 [2024-07-25 13:46:35.201850] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.241 [2024-07-25 13:46:35.201859] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.241 [2024-07-25 13:46:35.201893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.178 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:39.178 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:17:39.178 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:39.178 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:39.178 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:39.178 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.178 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:17:39.178 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:39.178 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:39.178 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:39.178 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:39.178 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:39.178 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:39.178 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:39.437 [2024-07-25 13:46:36.306323] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.437 [2024-07-25 13:46:36.322300] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:39.437 [2024-07-25 13:46:36.322548] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:39.437 
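
The key string written to key.txt (NVMeTLSkey-1:01:...) is in the NVMe/TCP PSK interchange format, where the 01 hash field corresponds to HMAC-SHA-256; the file is chmod 0600 because the target rejects world-readable key material. setup_nvmf_tgt_conf then drives the target through rpc.py to produce the listener shown above. The exact call sequence lives in fips.sh; the following is only a representative sketch for this SPDK vintage, reusing the NQNs the initiator attaches with later, with illustrative malloc sizing and serial number:

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    key="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt"

    $rpc nvmf_create_transport -t tcp -o                      # matches NVMF_TRANSPORT_OPTS above
    $rpc bdev_malloc_create 32 4096 -b malloc0                # backing ramdisk (sizing illustrative)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 --secure-channel           # triggers the "TLS ... experimental" notice
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk "$key"                # the PSK-path form deprecated above
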
[2024-07-25 13:46:36.353465] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:39.437 malloc0 00:17:39.437 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:39.437 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=590137 00:17:39.437 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:39.437 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 590137 /var/tmp/bdevperf.sock 00:17:39.437 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 590137 ']' 00:17:39.437 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:39.437 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:39.437 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:39.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:39.437 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:39.437 13:46:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:39.437 [2024-07-25 13:46:36.445799] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:39.437 [2024-07-25 13:46:36.445889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid590137 ] 00:17:39.696 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.696 [2024-07-25 13:46:36.506031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.696 [2024-07-25 13:46:36.611927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.633 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:40.633 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:17:40.633 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:40.893 [2024-07-25 13:46:37.688267] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:40.893 [2024-07-25 13:46:37.688422] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:40.893 TLSTESTn1 00:17:40.893 13:46:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:40.893 Running I/O for 10 seconds... 
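
The performance half of the test runs bdevperf as a second SPDK application on its own RPC socket; TLSTESTn1 is the bdev created by attaching the controller over TLS. Condensed from the trace, with the long workspace prefix shortened to $spdk (in the harness the socket is awaited with waitforlisten rather than relying on timing):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Start bdevperf idle (-z): core mask 0x4, queue depth 128, 4 KiB verify workload, 10 s.
    $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &

    # Attach the NVMe-oF controller over TLS, presenting the same PSK the target holds.
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk $spdk/test/nvmf/fips/key.txt

    # Kick off I/O against every configured bdev and print the latency table below.
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
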
00:17:53.122
00:17:53.122                                                                                                 Latency(us)
00:17:53.122 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:17:53.122 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:53.122 Verification LBA range: start 0x0 length 0x2000
00:17:53.122 TLSTESTn1                   :      10.02    3525.95      13.77      0.00      0.00   36243.01    6844.87   33981.63
00:17:53.122 ===================================================================================================================
00:17:53.122 Total                       :               3525.95      13.77      0.00      0.00   36243.01    6844.87   33981.63
00:17:53.122 0
00:17:53.122 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:53.122 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:53.122 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:17:53.122 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:17:53.122 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:17:53.122 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:53.122 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:17:53.122 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:17:53.122 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:17:53.122 13:46:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:53.122 nvmf_trace.0 00:17:53.122 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:17:53.122 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 590137 00:17:53.122 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 590137 ']' 00:17:53.122 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 590137 00:17:53.122 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:17:53.122 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:53.122 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 590137 00:17:53.122 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:53.122 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:53.122 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 590137' 00:17:53.122 killing process with pid 590137 00:17:53.122 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 590137
00:17:53.122 Received shutdown signal, test time was about 10.000000 seconds
00:17:53.122
00:17:53.122                                                                                                 Latency(us)
00:17:53.122 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:17:53.122 ===================================================================================================================
00:17:53.122 Total                       :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:17:53.122 [2024-07-25
13:46:48.049533] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:53.122 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 590137 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:53.123 rmmod nvme_tcp 00:17:53.123 rmmod nvme_fabrics 00:17:53.123 rmmod nvme_keyring 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 589935 ']' 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 589935 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 589935 ']' 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 589935 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 589935 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 589935' 00:17:53.123 killing process with pid 589935 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 589935 00:17:53.123 [2024-07-25 13:46:48.399580] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 589935 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:53.123 13:46:48 
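
Cleanup then runs in reverse order of setup: the kernel NVMe/TCP modules are unloaded (the rmmod lines above), the target process is killed, and the namespace plumbing is torn down. A condensed sketch of what nvmftestfini/nvmf_tcp_fini do here, with _remove_spdk_ns assumed to delete the namespace created earlier:

    sync
    modprobe -v -r nvme-tcp        # also drops nvme_fabrics and nvme_keyring, as logged
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                # target app, pid 589935 in this run
    ip netns del cvl_0_0_ns_spdk   # assumption: _remove_spdk_ns deletes the spdk netns
    ip -4 addr flush cvl_0_1       # drop the initiator-side test address
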
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:53.123 13:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.691 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:53.691 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:53.691 00:17:53.691 real 0m18.191s 00:17:53.691 user 0m24.406s 00:17:53.691 sys 0m5.408s 00:17:53.691 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:53.691 13:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:53.691 ************************************ 00:17:53.691 END TEST nvmf_fips 00:17:53.691 ************************************ 00:17:53.950 13:46:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:17:53.950 13:46:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:17:53.950 13:46:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:17:53.950 13:46:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:17:53.950 13:46:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:17:53.950 13:46:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:55.851 
13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:55.851 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:55.851 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:55.851 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:55.851 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:55.851 ************************************ 00:17:55.851 START TEST nvmf_perf_adq 00:17:55.851 ************************************ 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:17:55.851 * Looking for test storage... 
00:17:55.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.851 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.852 13:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:17:55.852 13:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:58.386 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:58.386 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:17:58.386 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:58.386 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:58.386 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:58.386 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:58.386 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:58.386 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:17:58.386 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:58.386 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:17:58.386 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:17:58.386 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:17:58.386 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:17:58.386 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:17:58.386 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:58.387 13:46:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:58.387 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:58.387 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:58.387 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
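
This PCI scan repeats for every sub-test: each supported device ID found on the bus is mapped to its kernel net device by globbing sysfs, and a device only counts if its link is up (the `[[ up == up ]]` checks). A minimal standalone sketch of the same lookup, assuming the two E810 ports of this machine and taking the up-check to be a read of operstate:

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            dev=${netdir##*/}                        # e.g. cvl_0_0, cvl_0_1
            if [[ $(cat "$netdir/operstate") == up ]]; then
                echo "Found net devices under $pci: $dev"
            fi
        done
    done
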
00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:58.387 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:17:58.387 13:46:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:17:58.646 13:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:18:00.549 13:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
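
Before any ADQ measurements, perf_adq.sh reloads the ice driver, presumably so ADQ-related queue configuration starts from a clean slate, then waits for the ports to reappear before reinitializing the test network:

    rmmod ice
    modprobe ice
    sleep 5     # give the cvl_* interfaces time to come back before nvmftestinit runs
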
00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:05.827 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:05.827 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:05.827 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:05.828 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:05.828 13:47:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:05.828 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
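nvmf_tcp_init (nvmf/common.sh@229-261 above) turns the two ports of one physical NIC into a back-to-back target/initiator pair on a single host: the target-side port is moved into a dedicated network namespace, so NVMe/TCP traffic between the SPDK target and the perf initiator crosses real E810 hardware instead of loopback. A condensed sketch of that topology, assuming two cabled ports named cvl_0_0 and cvl_0_1 as in this run:

    TARGET_NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"              # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up

The iptables rule and the two pings that follow in the log then open port 4420 on the initiator side and verify reachability in both directions before any NVMe traffic is attempted.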
00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:05.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:05.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:18:05.828 00:18:05.828 --- 10.0.0.2 ping statistics --- 00:18:05.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.828 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:05.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:05.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:18:05.828 00:18:05.828 --- 10.0.0.1 ping statistics --- 00:18:05.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.828 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=596013 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 596013 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 596013 ']' 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:05.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:05.828 [2024-07-25 13:47:02.612760] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:05.828 [2024-07-25 13:47:02.612837] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.828 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.828 [2024-07-25 13:47:02.675723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:05.828 [2024-07-25 13:47:02.782739] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.828 [2024-07-25 13:47:02.782785] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.828 [2024-07-25 13:47:02.782810] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.828 [2024-07-25 13:47:02.782821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.828 [2024-07-25 13:47:02.782831] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:05.828 [2024-07-25 13:47:02.782929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.828 [2024-07-25 13:47:02.783021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.828 [2024-07-25 13:47:02.783116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:05.828 [2024-07-25 13:47:02.783120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:18:05.828 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:18:05.829 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.829 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:18:05.829 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:05.829 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.087 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
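adq_configure_nvmf_target (perf_adq.sh@42-49) drives the target entirely over JSON-RPC; its single argument (0 in this baseline pass, 1 in the ADQ pass later in the log) ends up in both --enable-placement-id and --sock-priority. The calls the following log lines make, condensed here into scripts/rpc.py form on the assumption that rpc_cmd is the harness's thin wrapper around that script:

    impl=$(scripts/rpc.py sock_get_default_impl | jq -r .impl_name)   # "posix" here
    scripts/rpc.py sock_impl_set_options -i "$impl" \
        --enable-placement-id 0 --enable-zerocopy-send-server
    scripts/rpc.py framework_start_init      # target was launched with --wait-for-rpc
    scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420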
00:18:06.087 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:18:06.087 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.087 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:06.087 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.087 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:18:06.087 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.087 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:06.087 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.087 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:18:06.087 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.087 13:47:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:06.087 [2024-07-25 13:47:02.999879] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.087 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.087 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:06.087 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.087 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:06.087 Malloc1 00:18:06.087 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.087 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:06.087 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.087 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:06.087 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.087 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:06.087 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.087 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:06.087 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.087 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.087 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.087 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:06.087 [2024-07-25 13:47:03.053214] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.087 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.087 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=596048 00:18:06.087 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:18:06.087 13:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:06.087 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.614 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:18:08.614 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.614 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:08.614 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.614 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:18:08.614 "tick_rate": 2700000000, 00:18:08.614 "poll_groups": [ 00:18:08.614 { 00:18:08.614 "name": "nvmf_tgt_poll_group_000", 00:18:08.614 "admin_qpairs": 1, 00:18:08.614 "io_qpairs": 1, 00:18:08.614 "current_admin_qpairs": 1, 00:18:08.614 "current_io_qpairs": 1, 00:18:08.614 "pending_bdev_io": 0, 00:18:08.614 "completed_nvme_io": 19880, 00:18:08.614 "transports": [ 00:18:08.614 { 00:18:08.614 "trtype": "TCP" 00:18:08.614 } 00:18:08.614 ] 00:18:08.614 }, 00:18:08.614 { 00:18:08.614 "name": "nvmf_tgt_poll_group_001", 00:18:08.614 "admin_qpairs": 0, 00:18:08.614 "io_qpairs": 1, 00:18:08.614 "current_admin_qpairs": 0, 00:18:08.614 "current_io_qpairs": 1, 00:18:08.614 "pending_bdev_io": 0, 00:18:08.614 "completed_nvme_io": 20028, 00:18:08.614 "transports": [ 00:18:08.614 { 00:18:08.614 "trtype": "TCP" 00:18:08.614 } 00:18:08.614 ] 00:18:08.614 }, 00:18:08.614 { 00:18:08.614 "name": "nvmf_tgt_poll_group_002", 00:18:08.614 "admin_qpairs": 0, 00:18:08.614 "io_qpairs": 1, 00:18:08.614 "current_admin_qpairs": 0, 00:18:08.614 "current_io_qpairs": 1, 00:18:08.614 "pending_bdev_io": 0, 00:18:08.614 "completed_nvme_io": 20860, 00:18:08.614 "transports": [ 00:18:08.614 { 00:18:08.614 "trtype": "TCP" 00:18:08.614 } 00:18:08.614 ] 00:18:08.614 }, 00:18:08.614 { 00:18:08.614 "name": "nvmf_tgt_poll_group_003", 00:18:08.614 "admin_qpairs": 0, 00:18:08.614 "io_qpairs": 1, 00:18:08.614 "current_admin_qpairs": 0, 00:18:08.614 "current_io_qpairs": 1, 00:18:08.614 "pending_bdev_io": 0, 00:18:08.614 "completed_nvme_io": 20377, 00:18:08.614 "transports": [ 00:18:08.614 { 00:18:08.614 "trtype": "TCP" 00:18:08.614 } 00:18:08.614 ] 00:18:08.614 } 00:18:08.614 ] 00:18:08.614 }' 00:18:08.614 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:18:08.614 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:18:08.614 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:18:08.614 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:18:08.614 13:47:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
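With placement-id 0 the target assigns incoming connections round-robin, so the four I/O qpairs from the 0xF0-masked perf process land one per poll group, which is exactly what the nvmf_get_stats output above shows. The assertion at perf_adq.sh@78-79 counts poll groups with exactly one active I/O qpair (jq's length is used only to emit one line per match for wc -l). Restated as a standalone check, again substituting scripts/rpc.py for the harness's rpc_cmd:

    # Baseline expectation: every one of the 4 poll groups owns exactly 1 I/O qpair.
    count=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l)
    if [[ $count -ne 4 ]]; then
        echo "expected 4 busy poll groups, found $count" >&2
        exit 1
    fi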
target/perf_adq.sh@83 -- # wait 596048 00:18:16.725 Initializing NVMe Controllers 00:18:16.725 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:16.725 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:18:16.725 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:18:16.725 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:18:16.725 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:18:16.725 Initialization complete. Launching workers. 00:18:16.725 ======================================================== 00:18:16.725 Latency(us) 00:18:16.725 Device Information : IOPS MiB/s Average min max 00:18:16.725 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10737.29 41.94 5962.93 2396.96 10076.76 00:18:16.725 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10433.30 40.76 6135.73 2553.35 9641.75 00:18:16.725 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10420.10 40.70 6143.39 2469.23 10343.02 00:18:16.725 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10628.00 41.52 6024.24 2501.10 9832.37 00:18:16.725 ======================================================== 00:18:16.725 Total : 42218.69 164.92 6065.61 2396.96 10343.02 00:18:16.725 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:16.725 rmmod nvme_tcp 00:18:16.725 rmmod nvme_fabrics 00:18:16.725 rmmod nvme_keyring 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 596013 ']' 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 596013 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 596013 ']' 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 596013 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 596013 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:16.725 13:47:13 
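For later comparison, note how flat this baseline is: 10737.29 + 10433.30 + 10420.10 + 10628.00 = 42218.69 IOPS in total, so each of the four initiator cores carries between 24.7% and 25.4% of the load, and worst-case latency (10.34 ms) stays within roughly 1.7x the 6.07 ms average. This balanced, ADQ-off run is the reference point for the second pass below.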
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 596013' 00:18:16.725 killing process with pid 596013 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 596013 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 596013 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:16.725 13:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:18.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:18:18.630 13:47:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:18:19.197 13:47:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:18:21.724 13:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:27.018 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:27.018 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:27.018 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:27.018 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.018 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:27.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:27.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:18:27.019 00:18:27.019 --- 10.0.0.2 ping statistics --- 00:18:27.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.019 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:27.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:27.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:18:27.019 00:18:27.019 --- 10.0.0.1 ping statistics --- 00:18:27.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.019 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:18:27.019 net.core.busy_poll = 1 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:18:27.019 net.core.busy_read = 1 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:18:27.019 
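adq_configure_driver (perf_adq.sh@22-35) is where ADQ proper is set up on the target port, inside the target namespace: hardware TC offload is enabled, the channel-pkt-inspect-optimize private flag is switched off, kernel busy polling is turned on, and an mqprio root qdisc carves the port's queues into two hardware traffic classes; the filter command on the next log lines completes the picture by steering NVMe/TCP flows into the dedicated class. The same sequence laid out one step per line (cvl_0_0 and the 10.0.0.2:4420 listener are this run's values):

    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # Two HW traffic classes: TC0 = 2 queues at offset 0 (default traffic),
    # TC1 = 2 queues at offset 2. "map 0 1" sends socket priority 0 to TC0 and
    # priority 1 to TC1 - which is why the ADQ transport below is created with
    # --sock-priority 1.
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
        num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
    # Hardware-only (skip_sw) steering of the NVMe/TCP listener into TC1:
    ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: \
        prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1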
13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=598687 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 598687 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 598687 ']' 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:27.019 [2024-07-25 13:47:23.604208] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:27.019 [2024-07-25 13:47:23.604292] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.019 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.019 [2024-07-25 13:47:23.668882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:27.019 [2024-07-25 13:47:23.775116] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.019 [2024-07-25 13:47:23.775184] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.019 [2024-07-25 13:47:23.775206] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.019 [2024-07-25 13:47:23.775218] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.019 [2024-07-25 13:47:23.775228] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
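Two details in the lines above are easy to misread. First, the set_xps_rxqs helper (perf_adq.sh@38) configures transmit packet steering by receive queue (the xps_rxqs sysfs knob), so a connection's transmit traffic uses the same queue pair that the flower filter selected for receive. Second, the nvmf_tgt launch at nvmf/common.sh@480 shows a doubled "ip netns exec cvl_0_0_ns_spdk" prefix: nvmftestinit ran twice in this test, so common.sh@270 prepended NVMF_TARGET_NS_CMD to NVMF_APP a second time. The duplication is harmless, since re-entering the namespace the process is already in is a no-op, but it explains the odd-looking command line.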
00:18:27.019 [2024-07-25 13:47:23.775277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.019 [2024-07-25 13:47:23.775338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.019 [2024-07-25 13:47:23.775406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:27.019 [2024-07-25 13:47:23.775409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.019 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:18:27.020 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:18:27.020 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.020 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:27.020 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.020 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:18:27.020 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.020 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:27.020 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.020 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:18:27.020 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.020 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:27.020 [2024-07-25 13:47:23.979146] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.020 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:18:27.020 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:27.020 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.020 13:47:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:27.020 Malloc1 00:18:27.020 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.020 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:27.020 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.020 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:27.020 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.020 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:27.020 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.020 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:27.020 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.020 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:27.020 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.020 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:27.020 [2024-07-25 13:47:24.030637] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.020 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.020 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=598813 00:18:27.020 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:27.020 13:47:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:18:27.277 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.174 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:18:29.174 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.174 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:29.174 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.174 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:18:29.174 "tick_rate": 2700000000, 00:18:29.174 "poll_groups": [ 00:18:29.174 { 00:18:29.174 "name": "nvmf_tgt_poll_group_000", 00:18:29.174 "admin_qpairs": 1, 00:18:29.174 "io_qpairs": 3, 00:18:29.174 "current_admin_qpairs": 1, 00:18:29.174 
"current_io_qpairs": 3, 00:18:29.174 "pending_bdev_io": 0, 00:18:29.174 "completed_nvme_io": 25421, 00:18:29.174 "transports": [ 00:18:29.174 { 00:18:29.174 "trtype": "TCP" 00:18:29.174 } 00:18:29.174 ] 00:18:29.174 }, 00:18:29.174 { 00:18:29.174 "name": "nvmf_tgt_poll_group_001", 00:18:29.174 "admin_qpairs": 0, 00:18:29.174 "io_qpairs": 1, 00:18:29.174 "current_admin_qpairs": 0, 00:18:29.174 "current_io_qpairs": 1, 00:18:29.174 "pending_bdev_io": 0, 00:18:29.174 "completed_nvme_io": 26687, 00:18:29.174 "transports": [ 00:18:29.174 { 00:18:29.174 "trtype": "TCP" 00:18:29.174 } 00:18:29.174 ] 00:18:29.174 }, 00:18:29.174 { 00:18:29.174 "name": "nvmf_tgt_poll_group_002", 00:18:29.174 "admin_qpairs": 0, 00:18:29.174 "io_qpairs": 0, 00:18:29.174 "current_admin_qpairs": 0, 00:18:29.174 "current_io_qpairs": 0, 00:18:29.174 "pending_bdev_io": 0, 00:18:29.174 "completed_nvme_io": 0, 00:18:29.174 "transports": [ 00:18:29.174 { 00:18:29.174 "trtype": "TCP" 00:18:29.174 } 00:18:29.174 ] 00:18:29.174 }, 00:18:29.174 { 00:18:29.174 "name": "nvmf_tgt_poll_group_003", 00:18:29.174 "admin_qpairs": 0, 00:18:29.174 "io_qpairs": 0, 00:18:29.174 "current_admin_qpairs": 0, 00:18:29.174 "current_io_qpairs": 0, 00:18:29.174 "pending_bdev_io": 0, 00:18:29.174 "completed_nvme_io": 0, 00:18:29.174 "transports": [ 00:18:29.174 { 00:18:29.174 "trtype": "TCP" 00:18:29.174 } 00:18:29.174 ] 00:18:29.174 } 00:18:29.174 ] 00:18:29.174 }' 00:18:29.174 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:18:29.174 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:18:29.174 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:18:29.174 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:18:29.174 13:47:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 598813 00:18:37.311 Initializing NVMe Controllers 00:18:37.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:37.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:18:37.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:18:37.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:18:37.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:18:37.311 Initialization complete. Launching workers. 
00:18:37.311 ======================================================== 00:18:37.311 Latency(us) 00:18:37.311 Device Information : IOPS MiB/s Average min max 00:18:37.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13984.66 54.63 4576.85 1777.03 7113.62 00:18:37.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4639.35 18.12 13825.57 2033.79 59095.82 00:18:37.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4214.96 16.46 15234.55 1882.71 63833.67 00:18:37.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4434.16 17.32 14435.48 1726.27 61916.54 00:18:37.311 ======================================================== 00:18:37.311 Total : 27273.13 106.54 9400.08 1726.27 63833.67 00:18:37.311 00:18:37.311 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:18:37.311 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:37.311 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:18:37.311 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:37.311 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:18:37.311 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:37.311 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:37.311 rmmod nvme_tcp 00:18:37.311 rmmod nvme_fabrics 00:18:37.311 rmmod nvme_keyring 00:18:37.311 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:37.311 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:18:37.311 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:18:37.311 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 598687 ']' 00:18:37.311 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 598687 00:18:37.311 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 598687 ']' 00:18:37.311 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 598687 00:18:37.311 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:18:37.311 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:37.311 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 598687 00:18:37.311 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:37.311 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:37.311 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 598687' 00:18:37.311 killing process with pid 598687 00:18:37.311 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 598687 00:18:37.311 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 598687 00:18:37.568 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:37.568 13:47:34 
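Worth noting when reading these numbers: the ADQ pass totals 13984.66 + 4639.35 + 4214.96 + 4434.16 = 27273.13 IOPS, well below the 42218.69 IOPS baseline, with one core doing roughly half the work and the other three showing 13.8-15.2 ms averages and ~60 ms tails. The test asserts only that qpair placement behaved as configured, not that this particular 4-core random-read workload got faster, so it passes regardless.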
00:18:37.568 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:18:37.827 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:18:37.827 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:37.827 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns
00:18:37.827 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:37.827 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:18:37.827 13:47:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:18:41.113
00:18:41.113 real 0m44.899s
00:18:41.113 user 2m37.948s
00:18:41.113 sys 0m10.246s
00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:18:41.113 ************************************
00:18:41.113 END TEST nvmf_perf_adq
00:18:41.113 ************************************
00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:18:41.113 ************************************
00:18:41.113 START TEST nvmf_shutdown
00:18:41.113 ************************************
00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:18:41.113 * Looking for test storage...
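
The pass criterion for the ADQ run that just ended sits in the `count=2` / `[[ 2 -lt 2 ]]` pair above: with ADQ steering connections onto dedicated hardware queues, all IO qpairs should land on a subset of the four poll groups, and here nvmf_tgt_poll_group_002 and _003 stayed completely idle while groups 000 and 001 carried all the completed IO. A condensed form of that check; the rpc.py path and default socket are assumptions:

# perf_adq.sh@100-101 in condensed form: jq prints one value per idle
# poll group, wc -l turns that into a count, and fewer than 2 idle
# groups out of 4 would mean ADQ failed to confine the IO qpairs.
stats=$(scripts/rpc.py nvmf_get_stats)
count=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' <<< "$stats" | wc -l)
if [[ $count -lt 2 ]]; then
    echo "ADQ steering check failed: only $count idle poll groups" >&2
    exit 1
fi
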
00:18:41.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.113 13:47:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:41.113 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:18:41.114 13:47:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:18:41.114 ************************************ 00:18:41.114 START TEST nvmf_shutdown_tc1 00:18:41.114 ************************************ 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:41.114 13:47:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:43.014 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:43.014 13:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:43.014 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:43.015 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:43.015 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:43.015 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:43.015 13:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:18:43.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:43.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms
00:18:43.015
00:18:43.015 --- 10.0.0.2 ping statistics ---
00:18:43.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:43.015 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms
00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:18:43.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:43.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms
00:18:43.015
00:18:43.015 --- 10.0.0.1 ping statistics ---
00:18:43.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:43.015 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms
00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0
00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:18:43.015 13:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:18:43.015 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:18:43.015 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:18:43.015 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable
00:18:43.015 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:18:43.015 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=602064
00:18:43.015 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:18:43.015 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 602064
00:18:43.015 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 602064 ']'
00:18:43.015 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:43.015 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100
00:18:43.015 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:43.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:43.015 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable
00:18:43.015 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:18:43.274 [2024-07-25 13:47:40.063488] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:18:43.274 [2024-07-25 13:47:40.063568] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:43.274 EAL: No free 2048 kB hugepages reported on node 1
00:18:43.274 [2024-07-25 13:47:40.126435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:18:43.274 [2024-07-25 13:47:40.227735] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:43.274 [2024-07-25 13:47:40.227790] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:43.274 [2024-07-25 13:47:40.227813] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:43.274 [2024-07-25 13:47:40.227824] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:43.274 [2024-07-25 13:47:40.227833] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
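
The nvmf_tcp_init sequence above turns the two ports of the detected E810 NIC into a self-contained loop on one host: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), both directions are verified with a ping, and the target is then launched inside that namespace. A condensed replay of the traced commands, with addresses, device names, and masks taken from the log:

# Namespace-isolated target topology, as set up in nvmf/common.sh above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                      # both directions must answer
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# Launch the target inside the namespace; waitforlisten then polls the
# /var/tmp/spdk.sock RPC socket (up to max_retries=100) before any RPCs.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

Once the target answers, shutdown.sh batches per-subsystem setup RPCs into rpcs.txt (the rm/cat loop a few lines below), which is why Malloc1 through Malloc10 and the 10.0.0.2:4420 listener appear shortly after; the individual RPC lines in that file are not echoed in this log.
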
00:18:43.274 [2024-07-25 13:47:40.227923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.274 [2024-07-25 13:47:40.228028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:43.274 [2024-07-25 13:47:40.228119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:43.274 [2024-07-25 13:47:40.228124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:43.532 [2024-07-25 13:47:40.384637] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.532 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:43.532 Malloc1 00:18:43.532 [2024-07-25 13:47:40.467743] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.532 Malloc2 00:18:43.532 Malloc3 00:18:43.789 Malloc4 00:18:43.789 Malloc5 00:18:43.789 Malloc6 00:18:43.789 Malloc7 00:18:43.789 Malloc8 00:18:44.048 Malloc9 00:18:44.048 Malloc10 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=602185 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 602185 /var/tmp/bdevperf.sock 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 602185 ']' 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 
0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:44.048 { 00:18:44.048 "params": { 00:18:44.048 "name": "Nvme$subsystem", 00:18:44.048 "trtype": "$TEST_TRANSPORT", 00:18:44.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.048 "adrfam": "ipv4", 00:18:44.048 "trsvcid": "$NVMF_PORT", 00:18:44.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.048 "hdgst": ${hdgst:-false}, 00:18:44.048 "ddgst": ${ddgst:-false} 00:18:44.048 }, 00:18:44.048 "method": "bdev_nvme_attach_controller" 00:18:44.048 } 00:18:44.048 EOF 00:18:44.048 )") 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:44.048 { 00:18:44.048 "params": { 00:18:44.048 "name": "Nvme$subsystem", 00:18:44.048 "trtype": "$TEST_TRANSPORT", 00:18:44.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.048 "adrfam": "ipv4", 00:18:44.048 "trsvcid": "$NVMF_PORT", 00:18:44.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.048 "hdgst": ${hdgst:-false}, 00:18:44.048 "ddgst": ${ddgst:-false} 00:18:44.048 }, 00:18:44.048 "method": "bdev_nvme_attach_controller" 00:18:44.048 } 00:18:44.048 EOF 00:18:44.048 )") 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:44.048 { 00:18:44.048 "params": { 00:18:44.048 "name": "Nvme$subsystem", 
00:18:44.048 "trtype": "$TEST_TRANSPORT", 00:18:44.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.048 "adrfam": "ipv4", 00:18:44.048 "trsvcid": "$NVMF_PORT", 00:18:44.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.048 "hdgst": ${hdgst:-false}, 00:18:44.048 "ddgst": ${ddgst:-false} 00:18:44.048 }, 00:18:44.048 "method": "bdev_nvme_attach_controller" 00:18:44.048 } 00:18:44.048 EOF 00:18:44.048 )") 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:44.048 { 00:18:44.048 "params": { 00:18:44.048 "name": "Nvme$subsystem", 00:18:44.048 "trtype": "$TEST_TRANSPORT", 00:18:44.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.048 "adrfam": "ipv4", 00:18:44.048 "trsvcid": "$NVMF_PORT", 00:18:44.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.048 "hdgst": ${hdgst:-false}, 00:18:44.048 "ddgst": ${ddgst:-false} 00:18:44.048 }, 00:18:44.048 "method": "bdev_nvme_attach_controller" 00:18:44.048 } 00:18:44.048 EOF 00:18:44.048 )") 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:44.048 { 00:18:44.048 "params": { 00:18:44.048 "name": "Nvme$subsystem", 00:18:44.048 "trtype": "$TEST_TRANSPORT", 00:18:44.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.048 "adrfam": "ipv4", 00:18:44.048 "trsvcid": "$NVMF_PORT", 00:18:44.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.048 "hdgst": ${hdgst:-false}, 00:18:44.048 "ddgst": ${ddgst:-false} 00:18:44.048 }, 00:18:44.048 "method": "bdev_nvme_attach_controller" 00:18:44.048 } 00:18:44.048 EOF 00:18:44.048 )") 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:44.048 { 00:18:44.048 "params": { 00:18:44.048 "name": "Nvme$subsystem", 00:18:44.048 "trtype": "$TEST_TRANSPORT", 00:18:44.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.048 "adrfam": "ipv4", 00:18:44.048 "trsvcid": "$NVMF_PORT", 00:18:44.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.048 "hdgst": ${hdgst:-false}, 00:18:44.048 "ddgst": ${ddgst:-false} 00:18:44.048 }, 00:18:44.048 "method": "bdev_nvme_attach_controller" 00:18:44.048 } 00:18:44.048 EOF 00:18:44.048 )") 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:44.048 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:44.048 13:47:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:44.048 { 00:18:44.048 "params": { 00:18:44.048 "name": "Nvme$subsystem", 00:18:44.048 "trtype": "$TEST_TRANSPORT", 00:18:44.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.049 "adrfam": "ipv4", 00:18:44.049 "trsvcid": "$NVMF_PORT", 00:18:44.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.049 "hdgst": ${hdgst:-false}, 00:18:44.049 "ddgst": ${ddgst:-false} 00:18:44.049 }, 00:18:44.049 "method": "bdev_nvme_attach_controller" 00:18:44.049 } 00:18:44.049 EOF 00:18:44.049 )") 00:18:44.049 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:44.049 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:44.049 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:44.049 { 00:18:44.049 "params": { 00:18:44.049 "name": "Nvme$subsystem", 00:18:44.049 "trtype": "$TEST_TRANSPORT", 00:18:44.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.049 "adrfam": "ipv4", 00:18:44.049 "trsvcid": "$NVMF_PORT", 00:18:44.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.049 "hdgst": ${hdgst:-false}, 00:18:44.049 "ddgst": ${ddgst:-false} 00:18:44.049 }, 00:18:44.049 "method": "bdev_nvme_attach_controller" 00:18:44.049 } 00:18:44.049 EOF 00:18:44.049 )") 00:18:44.049 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:44.049 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:44.049 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:44.049 { 00:18:44.049 "params": { 00:18:44.049 "name": "Nvme$subsystem", 00:18:44.049 "trtype": "$TEST_TRANSPORT", 00:18:44.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.049 "adrfam": "ipv4", 00:18:44.049 "trsvcid": "$NVMF_PORT", 00:18:44.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.049 "hdgst": ${hdgst:-false}, 00:18:44.049 "ddgst": ${ddgst:-false} 00:18:44.049 }, 00:18:44.049 "method": "bdev_nvme_attach_controller" 00:18:44.049 } 00:18:44.049 EOF 00:18:44.049 )") 00:18:44.049 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:44.049 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:44.049 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:44.049 { 00:18:44.049 "params": { 00:18:44.049 "name": "Nvme$subsystem", 00:18:44.049 "trtype": "$TEST_TRANSPORT", 00:18:44.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.049 "adrfam": "ipv4", 00:18:44.049 "trsvcid": "$NVMF_PORT", 00:18:44.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.049 "hdgst": ${hdgst:-false}, 00:18:44.049 "ddgst": ${ddgst:-false} 00:18:44.049 }, 00:18:44.049 "method": "bdev_nvme_attach_controller" 00:18:44.049 } 00:18:44.049 EOF 00:18:44.049 )") 00:18:44.049 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:18:44.049 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:18:44.049 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:18:44.049 13:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:44.049 "params": { 00:18:44.049 "name": "Nvme1", 00:18:44.049 "trtype": "tcp", 00:18:44.049 "traddr": "10.0.0.2", 00:18:44.049 "adrfam": "ipv4", 00:18:44.049 "trsvcid": "4420", 00:18:44.049 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.049 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:44.049 "hdgst": false, 00:18:44.049 "ddgst": false 00:18:44.049 }, 00:18:44.049 "method": "bdev_nvme_attach_controller" 00:18:44.049 },{ 00:18:44.049 "params": { 00:18:44.049 "name": "Nvme2", 00:18:44.049 "trtype": "tcp", 00:18:44.049 "traddr": "10.0.0.2", 00:18:44.049 "adrfam": "ipv4", 00:18:44.049 "trsvcid": "4420", 00:18:44.049 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:44.049 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:44.049 "hdgst": false, 00:18:44.049 "ddgst": false 00:18:44.049 }, 00:18:44.049 "method": "bdev_nvme_attach_controller" 00:18:44.049 },{ 00:18:44.049 "params": { 00:18:44.049 "name": "Nvme3", 00:18:44.049 "trtype": "tcp", 00:18:44.049 "traddr": "10.0.0.2", 00:18:44.049 "adrfam": "ipv4", 00:18:44.049 "trsvcid": "4420", 00:18:44.049 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:44.049 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:44.049 "hdgst": false, 00:18:44.049 "ddgst": false 00:18:44.049 }, 00:18:44.049 "method": "bdev_nvme_attach_controller" 00:18:44.049 },{ 00:18:44.049 "params": { 00:18:44.049 "name": "Nvme4", 00:18:44.049 "trtype": "tcp", 00:18:44.049 "traddr": "10.0.0.2", 00:18:44.049 "adrfam": "ipv4", 00:18:44.049 "trsvcid": "4420", 00:18:44.049 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:44.049 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:44.049 "hdgst": false, 00:18:44.049 "ddgst": false 00:18:44.049 }, 00:18:44.049 "method": "bdev_nvme_attach_controller" 00:18:44.049 },{ 00:18:44.049 "params": { 00:18:44.049 "name": "Nvme5", 00:18:44.049 "trtype": "tcp", 00:18:44.049 "traddr": "10.0.0.2", 00:18:44.049 "adrfam": "ipv4", 00:18:44.049 "trsvcid": "4420", 00:18:44.049 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:44.049 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:44.049 "hdgst": false, 00:18:44.049 "ddgst": false 00:18:44.049 }, 00:18:44.049 "method": "bdev_nvme_attach_controller" 00:18:44.049 },{ 00:18:44.049 "params": { 00:18:44.049 "name": "Nvme6", 00:18:44.049 "trtype": "tcp", 00:18:44.049 "traddr": "10.0.0.2", 00:18:44.049 "adrfam": "ipv4", 00:18:44.049 "trsvcid": "4420", 00:18:44.049 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:44.049 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:44.049 "hdgst": false, 00:18:44.049 "ddgst": false 00:18:44.049 }, 00:18:44.049 "method": "bdev_nvme_attach_controller" 00:18:44.049 },{ 00:18:44.049 "params": { 00:18:44.049 "name": "Nvme7", 00:18:44.049 "trtype": "tcp", 00:18:44.049 "traddr": "10.0.0.2", 00:18:44.049 "adrfam": "ipv4", 00:18:44.049 "trsvcid": "4420", 00:18:44.049 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:44.049 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:44.049 "hdgst": false, 00:18:44.049 "ddgst": false 00:18:44.049 }, 00:18:44.049 "method": "bdev_nvme_attach_controller" 00:18:44.049 },{ 00:18:44.049 "params": { 00:18:44.049 "name": "Nvme8", 00:18:44.049 "trtype": "tcp", 00:18:44.049 "traddr": "10.0.0.2", 00:18:44.049 "adrfam": "ipv4", 
00:18:44.049 "trsvcid": "4420", 00:18:44.049 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:44.049 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:44.049 "hdgst": false, 00:18:44.049 "ddgst": false 00:18:44.049 }, 00:18:44.049 "method": "bdev_nvme_attach_controller" 00:18:44.049 },{ 00:18:44.049 "params": { 00:18:44.049 "name": "Nvme9", 00:18:44.049 "trtype": "tcp", 00:18:44.049 "traddr": "10.0.0.2", 00:18:44.049 "adrfam": "ipv4", 00:18:44.049 "trsvcid": "4420", 00:18:44.049 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:44.049 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:44.049 "hdgst": false, 00:18:44.049 "ddgst": false 00:18:44.049 }, 00:18:44.049 "method": "bdev_nvme_attach_controller" 00:18:44.049 },{ 00:18:44.049 "params": { 00:18:44.049 "name": "Nvme10", 00:18:44.049 "trtype": "tcp", 00:18:44.049 "traddr": "10.0.0.2", 00:18:44.049 "adrfam": "ipv4", 00:18:44.049 "trsvcid": "4420", 00:18:44.049 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:44.049 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:44.049 "hdgst": false, 00:18:44.049 "ddgst": false 00:18:44.049 }, 00:18:44.049 "method": "bdev_nvme_attach_controller" 00:18:44.049 }' 00:18:44.049 [2024-07-25 13:47:40.964278] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:44.050 [2024-07-25 13:47:40.964381] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:44.050 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.050 [2024-07-25 13:47:41.030532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.308 [2024-07-25 13:47:41.141745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.202 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:46.202 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:18:46.202 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:46.202 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.202 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:46.202 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.202 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 602185 00:18:46.202 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:18:46.202 13:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:18:47.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 602185 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 602064 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:47.135 { 00:18:47.135 "params": { 00:18:47.135 "name": "Nvme$subsystem", 00:18:47.135 "trtype": "$TEST_TRANSPORT", 00:18:47.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:47.135 "adrfam": "ipv4", 00:18:47.135 "trsvcid": "$NVMF_PORT", 00:18:47.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:47.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:47.135 "hdgst": ${hdgst:-false}, 00:18:47.135 "ddgst": ${ddgst:-false} 00:18:47.135 }, 00:18:47.135 "method": "bdev_nvme_attach_controller" 00:18:47.135 } 00:18:47.135 EOF 00:18:47.135 )") 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:47.135 { 00:18:47.135 "params": { 00:18:47.135 "name": "Nvme$subsystem", 00:18:47.135 "trtype": "$TEST_TRANSPORT", 00:18:47.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:47.135 "adrfam": "ipv4", 00:18:47.135 "trsvcid": "$NVMF_PORT", 00:18:47.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:47.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:47.135 "hdgst": ${hdgst:-false}, 00:18:47.135 "ddgst": ${ddgst:-false} 00:18:47.135 }, 00:18:47.135 "method": "bdev_nvme_attach_controller" 00:18:47.135 } 00:18:47.135 EOF 00:18:47.135 )") 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:47.135 { 00:18:47.135 "params": { 00:18:47.135 "name": "Nvme$subsystem", 00:18:47.135 "trtype": "$TEST_TRANSPORT", 00:18:47.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:47.135 "adrfam": "ipv4", 00:18:47.135 "trsvcid": "$NVMF_PORT", 00:18:47.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:47.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:47.135 "hdgst": ${hdgst:-false}, 00:18:47.135 "ddgst": ${ddgst:-false} 00:18:47.135 }, 00:18:47.135 "method": "bdev_nvme_attach_controller" 00:18:47.135 } 00:18:47.135 EOF 00:18:47.135 )") 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:47.135 { 00:18:47.135 "params": { 00:18:47.135 "name": "Nvme$subsystem", 00:18:47.135 "trtype": "$TEST_TRANSPORT", 00:18:47.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:47.135 "adrfam": "ipv4", 00:18:47.135 "trsvcid": "$NVMF_PORT", 00:18:47.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:47.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:47.135 "hdgst": ${hdgst:-false}, 00:18:47.135 "ddgst": ${ddgst:-false} 00:18:47.135 }, 00:18:47.135 "method": "bdev_nvme_attach_controller" 00:18:47.135 } 00:18:47.135 EOF 00:18:47.135 )") 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:47.135 { 00:18:47.135 "params": { 00:18:47.135 "name": "Nvme$subsystem", 00:18:47.135 "trtype": "$TEST_TRANSPORT", 00:18:47.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:47.135 "adrfam": "ipv4", 00:18:47.135 "trsvcid": "$NVMF_PORT", 00:18:47.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:47.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:47.135 "hdgst": ${hdgst:-false}, 00:18:47.135 "ddgst": ${ddgst:-false} 00:18:47.135 }, 00:18:47.135 "method": "bdev_nvme_attach_controller" 00:18:47.135 } 00:18:47.135 EOF 00:18:47.135 )") 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:47.135 { 00:18:47.135 "params": { 00:18:47.135 "name": "Nvme$subsystem", 00:18:47.135 "trtype": "$TEST_TRANSPORT", 00:18:47.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:47.135 "adrfam": "ipv4", 00:18:47.135 "trsvcid": "$NVMF_PORT", 00:18:47.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:47.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:47.135 "hdgst": ${hdgst:-false}, 00:18:47.135 "ddgst": ${ddgst:-false} 00:18:47.135 }, 00:18:47.135 "method": "bdev_nvme_attach_controller" 00:18:47.135 } 00:18:47.135 EOF 00:18:47.135 )") 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:47.135 { 00:18:47.135 "params": { 00:18:47.135 "name": "Nvme$subsystem", 00:18:47.135 "trtype": "$TEST_TRANSPORT", 00:18:47.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:47.135 "adrfam": "ipv4", 00:18:47.135 "trsvcid": "$NVMF_PORT", 00:18:47.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:47.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:47.135 "hdgst": ${hdgst:-false}, 00:18:47.135 "ddgst": ${ddgst:-false} 00:18:47.135 }, 00:18:47.135 "method": "bdev_nvme_attach_controller" 00:18:47.135 } 00:18:47.135 EOF 00:18:47.135 )") 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:47.135 13:47:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:47.135 { 00:18:47.135 "params": { 00:18:47.135 "name": "Nvme$subsystem", 00:18:47.135 "trtype": "$TEST_TRANSPORT", 00:18:47.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:47.135 "adrfam": "ipv4", 00:18:47.135 "trsvcid": "$NVMF_PORT", 00:18:47.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:47.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:47.135 "hdgst": ${hdgst:-false}, 00:18:47.135 "ddgst": ${ddgst:-false} 00:18:47.135 }, 00:18:47.135 "method": "bdev_nvme_attach_controller" 00:18:47.135 } 00:18:47.135 EOF 00:18:47.135 )") 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:47.135 { 00:18:47.135 "params": { 00:18:47.135 "name": "Nvme$subsystem", 00:18:47.135 "trtype": "$TEST_TRANSPORT", 00:18:47.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:47.135 "adrfam": "ipv4", 00:18:47.135 "trsvcid": "$NVMF_PORT", 00:18:47.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:47.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:47.135 "hdgst": ${hdgst:-false}, 00:18:47.135 "ddgst": ${ddgst:-false} 00:18:47.135 }, 00:18:47.135 "method": "bdev_nvme_attach_controller" 00:18:47.135 } 00:18:47.135 EOF 00:18:47.135 )") 00:18:47.135 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:47.136 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:47.136 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:47.136 { 00:18:47.136 "params": { 00:18:47.136 "name": "Nvme$subsystem", 00:18:47.136 "trtype": "$TEST_TRANSPORT", 00:18:47.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:47.136 "adrfam": "ipv4", 00:18:47.136 "trsvcid": "$NVMF_PORT", 00:18:47.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:47.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:47.136 "hdgst": ${hdgst:-false}, 00:18:47.136 "ddgst": ${ddgst:-false} 00:18:47.136 }, 00:18:47.136 "method": "bdev_nvme_attach_controller" 00:18:47.136 } 00:18:47.136 EOF 00:18:47.136 )") 00:18:47.136 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:18:47.136 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
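The ten near-identical blocks above are nvmf/common.sh's gen_nvmf_target_json accumulating one JSON fragment per subsystem and then merging them with jq. A minimal sketch of that pattern, assuming TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT are exported as in this run; the real helper indents its heredocs with tabs (<<-) and may wrap the fragments slightly differently:

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller fragment per requested subsystem id.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the fragments and validate/pretty-print with jq; bdevperf
    # then reads the result through --json /dev/fd/6x via process substitution.
    local IFS=,
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}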
00:18:47.136 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:18:47.136 13:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:47.136 "params": { 00:18:47.136 "name": "Nvme1", 00:18:47.136 "trtype": "tcp", 00:18:47.136 "traddr": "10.0.0.2", 00:18:47.136 "adrfam": "ipv4", 00:18:47.136 "trsvcid": "4420", 00:18:47.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:47.136 "hdgst": false, 00:18:47.136 "ddgst": false 00:18:47.136 }, 00:18:47.136 "method": "bdev_nvme_attach_controller" 00:18:47.136 },{ 00:18:47.136 "params": { 00:18:47.136 "name": "Nvme2", 00:18:47.136 "trtype": "tcp", 00:18:47.136 "traddr": "10.0.0.2", 00:18:47.136 "adrfam": "ipv4", 00:18:47.136 "trsvcid": "4420", 00:18:47.136 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:47.136 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:47.136 "hdgst": false, 00:18:47.136 "ddgst": false 00:18:47.136 }, 00:18:47.136 "method": "bdev_nvme_attach_controller" 00:18:47.136 },{ 00:18:47.136 "params": { 00:18:47.136 "name": "Nvme3", 00:18:47.136 "trtype": "tcp", 00:18:47.136 "traddr": "10.0.0.2", 00:18:47.136 "adrfam": "ipv4", 00:18:47.136 "trsvcid": "4420", 00:18:47.136 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:47.136 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:47.136 "hdgst": false, 00:18:47.136 "ddgst": false 00:18:47.136 }, 00:18:47.136 "method": "bdev_nvme_attach_controller" 00:18:47.136 },{ 00:18:47.136 "params": { 00:18:47.136 "name": "Nvme4", 00:18:47.136 "trtype": "tcp", 00:18:47.136 "traddr": "10.0.0.2", 00:18:47.136 "adrfam": "ipv4", 00:18:47.136 "trsvcid": "4420", 00:18:47.136 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:47.136 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:47.136 "hdgst": false, 00:18:47.136 "ddgst": false 00:18:47.136 }, 00:18:47.136 "method": "bdev_nvme_attach_controller" 00:18:47.136 },{ 00:18:47.136 "params": { 00:18:47.136 "name": "Nvme5", 00:18:47.136 "trtype": "tcp", 00:18:47.136 "traddr": "10.0.0.2", 00:18:47.136 "adrfam": "ipv4", 00:18:47.136 "trsvcid": "4420", 00:18:47.136 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:47.136 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:47.136 "hdgst": false, 00:18:47.136 "ddgst": false 00:18:47.136 }, 00:18:47.136 "method": "bdev_nvme_attach_controller" 00:18:47.136 },{ 00:18:47.136 "params": { 00:18:47.136 "name": "Nvme6", 00:18:47.136 "trtype": "tcp", 00:18:47.136 "traddr": "10.0.0.2", 00:18:47.136 "adrfam": "ipv4", 00:18:47.136 "trsvcid": "4420", 00:18:47.136 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:47.136 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:47.136 "hdgst": false, 00:18:47.136 "ddgst": false 00:18:47.136 }, 00:18:47.136 "method": "bdev_nvme_attach_controller" 00:18:47.136 },{ 00:18:47.136 "params": { 00:18:47.136 "name": "Nvme7", 00:18:47.136 "trtype": "tcp", 00:18:47.136 "traddr": "10.0.0.2", 00:18:47.136 "adrfam": "ipv4", 00:18:47.136 "trsvcid": "4420", 00:18:47.136 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:47.136 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:47.136 "hdgst": false, 00:18:47.136 "ddgst": false 00:18:47.136 }, 00:18:47.136 "method": "bdev_nvme_attach_controller" 00:18:47.136 },{ 00:18:47.136 "params": { 00:18:47.136 "name": "Nvme8", 00:18:47.136 "trtype": "tcp", 00:18:47.136 "traddr": "10.0.0.2", 00:18:47.136 "adrfam": "ipv4", 00:18:47.136 "trsvcid": "4420", 00:18:47.136 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:47.136 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:18:47.136 "hdgst": false, 00:18:47.136 "ddgst": false 00:18:47.136 }, 00:18:47.136 "method": "bdev_nvme_attach_controller" 00:18:47.136 },{ 00:18:47.136 "params": { 00:18:47.136 "name": "Nvme9", 00:18:47.136 "trtype": "tcp", 00:18:47.136 "traddr": "10.0.0.2", 00:18:47.136 "adrfam": "ipv4", 00:18:47.136 "trsvcid": "4420", 00:18:47.136 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:47.136 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:47.136 "hdgst": false, 00:18:47.136 "ddgst": false 00:18:47.136 }, 00:18:47.136 "method": "bdev_nvme_attach_controller" 00:18:47.136 },{ 00:18:47.136 "params": { 00:18:47.136 "name": "Nvme10", 00:18:47.136 "trtype": "tcp", 00:18:47.136 "traddr": "10.0.0.2", 00:18:47.136 "adrfam": "ipv4", 00:18:47.136 "trsvcid": "4420", 00:18:47.136 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:47.136 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:47.136 "hdgst": false, 00:18:47.136 "ddgst": false 00:18:47.136 }, 00:18:47.136 "method": "bdev_nvme_attach_controller" 00:18:47.136 }' 00:18:47.136 [2024-07-25 13:47:43.977317] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:47.136 [2024-07-25 13:47:43.977421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602603 ] 00:18:47.136 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.136 [2024-07-25 13:47:44.041863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.136 [2024-07-25 13:47:44.151615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.509 Running I/O for 1 seconds... 00:18:49.883 00:18:49.883 Latency(us) 00:18:49.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.883 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:49.883 Verification LBA range: start 0x0 length 0x400 00:18:49.883 Nvme1n1 : 1.10 233.65 14.60 0.00 0.00 270413.75 19126.80 251658.24 00:18:49.883 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:49.883 Verification LBA range: start 0x0 length 0x400 00:18:49.883 Nvme2n1 : 1.15 221.93 13.87 0.00 0.00 281113.41 20291.89 259425.47 00:18:49.883 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:49.883 Verification LBA range: start 0x0 length 0x400 00:18:49.883 Nvme3n1 : 1.17 273.71 17.11 0.00 0.00 221412.62 16214.09 246997.90 00:18:49.883 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:49.883 Verification LBA range: start 0x0 length 0x400 00:18:49.883 Nvme4n1 : 1.09 234.93 14.68 0.00 0.00 255844.50 17379.18 253211.69 00:18:49.883 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:49.883 Verification LBA range: start 0x0 length 0x400 00:18:49.884 Nvme5n1 : 1.16 221.19 13.82 0.00 0.00 268299.76 22039.51 257872.02 00:18:49.884 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:49.884 Verification LBA range: start 0x0 length 0x400 00:18:49.884 Nvme6n1 : 1.17 218.45 13.65 0.00 0.00 267340.04 23204.60 254765.13 00:18:49.884 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:49.884 Verification LBA range: start 0x0 length 0x400 00:18:49.884 Nvme7n1 : 1.19 269.30 16.83 0.00 0.00 213355.94 17670.45 254765.13 00:18:49.884 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:49.884 
Verification LBA range: start 0x0 length 0x400 00:18:49.884 Nvme8n1 : 1.19 269.98 16.87 0.00 0.00 208657.67 15340.28 250104.79 00:18:49.884 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:49.884 Verification LBA range: start 0x0 length 0x400 00:18:49.884 Nvme9n1 : 1.17 219.18 13.70 0.00 0.00 252936.15 19709.35 264085.81 00:18:49.884 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:49.884 Verification LBA range: start 0x0 length 0x400 00:18:49.884 Nvme10n1 : 1.18 217.10 13.57 0.00 0.00 251318.04 22816.24 282727.16 00:18:49.884 =================================================================================================================== 00:18:49.884 Total : 2379.42 148.71 0.00 0.00 246655.67 15340.28 282727.16 00:18:49.884 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:18:49.884 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:18:49.884 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:49.884 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:49.884 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:18:49.884 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:49.884 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:18:49.884 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:49.884 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:18:49.884 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:49.884 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:49.884 rmmod nvme_tcp 00:18:49.884 rmmod nvme_fabrics 00:18:49.884 rmmod nvme_keyring 00:18:49.884 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:49.884 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:18:49.884 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:18:49.884 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 602064 ']' 00:18:49.884 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 602064 00:18:49.884 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 602064 ']' 00:18:49.884 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 602064 00:18:49.884 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:18:49.884 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
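The teardown following the results table is stoptarget/nvmftestfini: remove the state files, unload the initiator-side kernel modules, then reap the target process. Condensed from the trace, with the pid value taken from this run:

# Unload kernel NVMe/TCP support; the preceding set +e tolerates
# modules that are already gone.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# killprocess: make sure the target really exits before the next test case,
# freeing the listening port and the hugepage allocation.
pid=602064
if kill -0 "$pid" 2>/dev/null; then
    kill "$pid"
    wait "$pid"
fi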
00:18:49.884 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 602064 00:18:50.142 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:50.142 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:50.142 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 602064' 00:18:50.142 killing process with pid 602064 00:18:50.142 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 602064 00:18:50.142 13:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 602064 00:18:50.709 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:50.709 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:50.709 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:50.709 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:50.709 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:50.709 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.709 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:50.709 13:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:52.615 00:18:52.615 real 0m11.676s 00:18:52.615 user 0m33.173s 00:18:52.615 sys 0m3.251s 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:52.615 ************************************ 00:18:52.615 END TEST nvmf_shutdown_tc1 00:18:52.615 ************************************ 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:18:52.615 ************************************ 00:18:52.615 START TEST nvmf_shutdown_tc2 00:18:52.615 ************************************ 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:18:52.615 13:47:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:18:52.615 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local 
-ga mlx 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:52.616 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:52.616 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:52.616 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.616 13:47:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:52.616 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:52.616 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:52.875 13:47:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:52.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:18:52.875 00:18:52.875 --- 10.0.0.2 ping statistics --- 00:18:52.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.875 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:52.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:52.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:18:52.875 00:18:52.875 --- 10.0.0.1 ping statistics --- 00:18:52.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.875 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=603368 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 603368 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 603368 ']' 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:52.875 13:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:52.875 [2024-07-25 13:47:49.775735] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:52.875 [2024-07-25 13:47:49.775826] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.875 EAL: No free 2048 kB hugepages reported on node 1 00:18:52.875 [2024-07-25 13:47:49.842268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:53.134 [2024-07-25 13:47:49.954015] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.134 [2024-07-25 13:47:49.954096] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.134 [2024-07-25 13:47:49.954111] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.134 [2024-07-25 13:47:49.954122] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.134 [2024-07-25 13:47:49.954132] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
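The doubled "ip netns exec cvl_0_0_ns_spdk" in the nvmfpid line above runs the tc2 target inside the namespace that nvmftestinit set up earlier. That plumbing, condensed from the trace (interface names and addresses as logged), isolates target and initiator on separate IP stacks of a single host:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # target reachable from the initiator
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and the reverse path

Once the ping checks pass, nvmf_tgt is launched through ip netns exec so that 10.0.0.2:4420 terminates inside the namespace, and waitforlisten polls /var/tmp/spdk.sock before the first RPC is issued.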
00:18:53.134 [2024-07-25 13:47:49.954229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.134 [2024-07-25 13:47:49.954291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:53.134 [2024-07-25 13:47:49.954345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:53.134 [2024-07-25 13:47:49.954348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:53.134 [2024-07-25 13:47:50.100383] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.134 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:53.134 Malloc1 00:18:53.393 [2024-07-25 13:47:50.185931] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.393 Malloc2 00:18:53.393 Malloc3 00:18:53.393 Malloc4 00:18:53.393 Malloc5 00:18:53.393 Malloc6 00:18:53.652 Malloc7 00:18:53.652 Malloc8 00:18:53.652 Malloc9 00:18:53.652 Malloc10 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=603545 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 603545 /var/tmp/bdevperf.sock 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 603545 ']' 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:53.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:53.652 { 00:18:53.652 "params": { 00:18:53.652 "name": "Nvme$subsystem", 00:18:53.652 "trtype": "$TEST_TRANSPORT", 00:18:53.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:53.652 "adrfam": "ipv4", 00:18:53.652 "trsvcid": "$NVMF_PORT", 00:18:53.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:53.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:53.652 "hdgst": ${hdgst:-false}, 00:18:53.652 "ddgst": ${ddgst:-false} 00:18:53.652 }, 00:18:53.652 "method": "bdev_nvme_attach_controller" 00:18:53.652 } 00:18:53.652 EOF 00:18:53.652 )") 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:53.652 { 00:18:53.652 "params": { 00:18:53.652 "name": "Nvme$subsystem", 00:18:53.652 "trtype": "$TEST_TRANSPORT", 00:18:53.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:53.652 "adrfam": "ipv4", 00:18:53.652 "trsvcid": "$NVMF_PORT", 00:18:53.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:53.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:53.652 "hdgst": ${hdgst:-false}, 00:18:53.652 "ddgst": ${ddgst:-false} 00:18:53.652 }, 00:18:53.652 "method": "bdev_nvme_attach_controller" 00:18:53.652 } 00:18:53.652 EOF 00:18:53.652 )") 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:53.652 { 00:18:53.652 "params": { 00:18:53.652 "name": 
"Nvme$subsystem", 00:18:53.652 "trtype": "$TEST_TRANSPORT", 00:18:53.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:53.652 "adrfam": "ipv4", 00:18:53.652 "trsvcid": "$NVMF_PORT", 00:18:53.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:53.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:53.652 "hdgst": ${hdgst:-false}, 00:18:53.652 "ddgst": ${ddgst:-false} 00:18:53.652 }, 00:18:53.652 "method": "bdev_nvme_attach_controller" 00:18:53.652 } 00:18:53.652 EOF 00:18:53.652 )") 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:53.652 { 00:18:53.652 "params": { 00:18:53.652 "name": "Nvme$subsystem", 00:18:53.652 "trtype": "$TEST_TRANSPORT", 00:18:53.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:53.652 "adrfam": "ipv4", 00:18:53.652 "trsvcid": "$NVMF_PORT", 00:18:53.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:53.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:53.652 "hdgst": ${hdgst:-false}, 00:18:53.652 "ddgst": ${ddgst:-false} 00:18:53.652 }, 00:18:53.652 "method": "bdev_nvme_attach_controller" 00:18:53.652 } 00:18:53.652 EOF 00:18:53.652 )") 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:53.652 { 00:18:53.652 "params": { 00:18:53.652 "name": "Nvme$subsystem", 00:18:53.652 "trtype": "$TEST_TRANSPORT", 00:18:53.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:53.652 "adrfam": "ipv4", 00:18:53.652 "trsvcid": "$NVMF_PORT", 00:18:53.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:53.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:53.652 "hdgst": ${hdgst:-false}, 00:18:53.652 "ddgst": ${ddgst:-false} 00:18:53.652 }, 00:18:53.652 "method": "bdev_nvme_attach_controller" 00:18:53.652 } 00:18:53.652 EOF 00:18:53.652 )") 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:53.652 { 00:18:53.652 "params": { 00:18:53.652 "name": "Nvme$subsystem", 00:18:53.652 "trtype": "$TEST_TRANSPORT", 00:18:53.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:53.652 "adrfam": "ipv4", 00:18:53.652 "trsvcid": "$NVMF_PORT", 00:18:53.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:53.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:53.652 "hdgst": ${hdgst:-false}, 00:18:53.652 "ddgst": ${ddgst:-false} 00:18:53.652 }, 00:18:53.652 "method": "bdev_nvme_attach_controller" 00:18:53.652 } 00:18:53.652 EOF 00:18:53.652 )") 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:53.652 { 00:18:53.652 "params": { 00:18:53.652 "name": "Nvme$subsystem", 00:18:53.652 "trtype": "$TEST_TRANSPORT", 00:18:53.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:53.652 "adrfam": "ipv4", 00:18:53.652 "trsvcid": "$NVMF_PORT", 00:18:53.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:53.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:53.652 "hdgst": ${hdgst:-false}, 00:18:53.652 "ddgst": ${ddgst:-false} 00:18:53.652 }, 00:18:53.652 "method": "bdev_nvme_attach_controller" 00:18:53.652 } 00:18:53.652 EOF 00:18:53.652 )") 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:53.652 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:53.652 { 00:18:53.652 "params": { 00:18:53.652 "name": "Nvme$subsystem", 00:18:53.652 "trtype": "$TEST_TRANSPORT", 00:18:53.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:53.653 "adrfam": "ipv4", 00:18:53.653 "trsvcid": "$NVMF_PORT", 00:18:53.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:53.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:53.653 "hdgst": ${hdgst:-false}, 00:18:53.653 "ddgst": ${ddgst:-false} 00:18:53.653 }, 00:18:53.653 "method": "bdev_nvme_attach_controller" 00:18:53.653 } 00:18:53.653 EOF 00:18:53.653 )") 00:18:53.653 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:53.653 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:53.653 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:53.653 { 00:18:53.653 "params": { 00:18:53.653 "name": "Nvme$subsystem", 00:18:53.653 "trtype": "$TEST_TRANSPORT", 00:18:53.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:53.653 "adrfam": "ipv4", 00:18:53.653 "trsvcid": "$NVMF_PORT", 00:18:53.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:53.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:53.653 "hdgst": ${hdgst:-false}, 00:18:53.653 "ddgst": ${ddgst:-false} 00:18:53.653 }, 00:18:53.653 "method": "bdev_nvme_attach_controller" 00:18:53.653 } 00:18:53.653 EOF 00:18:53.653 )") 00:18:53.653 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:53.653 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:53.653 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:53.653 { 00:18:53.653 "params": { 00:18:53.653 "name": "Nvme$subsystem", 00:18:53.653 "trtype": "$TEST_TRANSPORT", 00:18:53.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:53.653 "adrfam": "ipv4", 00:18:53.653 "trsvcid": "$NVMF_PORT", 00:18:53.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:53.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:53.653 "hdgst": ${hdgst:-false}, 00:18:53.653 "ddgst": ${ddgst:-false} 00:18:53.653 }, 00:18:53.653 "method": "bdev_nvme_attach_controller" 00:18:53.653 } 00:18:53.653 EOF 00:18:53.653 )") 00:18:53.653 13:47:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:18:53.653 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:18:53.911 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:18:53.911 13:47:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:53.911 "params": { 00:18:53.911 "name": "Nvme1", 00:18:53.911 "trtype": "tcp", 00:18:53.911 "traddr": "10.0.0.2", 00:18:53.911 "adrfam": "ipv4", 00:18:53.911 "trsvcid": "4420", 00:18:53.911 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.911 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:53.911 "hdgst": false, 00:18:53.911 "ddgst": false 00:18:53.911 }, 00:18:53.911 "method": "bdev_nvme_attach_controller" 00:18:53.911 },{ 00:18:53.911 "params": { 00:18:53.911 "name": "Nvme2", 00:18:53.911 "trtype": "tcp", 00:18:53.911 "traddr": "10.0.0.2", 00:18:53.911 "adrfam": "ipv4", 00:18:53.911 "trsvcid": "4420", 00:18:53.911 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:53.911 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:53.911 "hdgst": false, 00:18:53.911 "ddgst": false 00:18:53.911 }, 00:18:53.911 "method": "bdev_nvme_attach_controller" 00:18:53.911 },{ 00:18:53.911 "params": { 00:18:53.911 "name": "Nvme3", 00:18:53.911 "trtype": "tcp", 00:18:53.911 "traddr": "10.0.0.2", 00:18:53.911 "adrfam": "ipv4", 00:18:53.911 "trsvcid": "4420", 00:18:53.911 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:53.911 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:53.911 "hdgst": false, 00:18:53.911 "ddgst": false 00:18:53.911 }, 00:18:53.911 "method": "bdev_nvme_attach_controller" 00:18:53.911 },{ 00:18:53.911 "params": { 00:18:53.911 "name": "Nvme4", 00:18:53.911 "trtype": "tcp", 00:18:53.911 "traddr": "10.0.0.2", 00:18:53.911 "adrfam": "ipv4", 00:18:53.911 "trsvcid": "4420", 00:18:53.911 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:53.911 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:53.911 "hdgst": false, 00:18:53.911 "ddgst": false 00:18:53.911 }, 00:18:53.911 "method": "bdev_nvme_attach_controller" 00:18:53.911 },{ 00:18:53.911 "params": { 00:18:53.911 "name": "Nvme5", 00:18:53.911 "trtype": "tcp", 00:18:53.911 "traddr": "10.0.0.2", 00:18:53.911 "adrfam": "ipv4", 00:18:53.911 "trsvcid": "4420", 00:18:53.911 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:53.911 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:53.911 "hdgst": false, 00:18:53.911 "ddgst": false 00:18:53.911 }, 00:18:53.911 "method": "bdev_nvme_attach_controller" 00:18:53.911 },{ 00:18:53.911 "params": { 00:18:53.911 "name": "Nvme6", 00:18:53.911 "trtype": "tcp", 00:18:53.911 "traddr": "10.0.0.2", 00:18:53.911 "adrfam": "ipv4", 00:18:53.911 "trsvcid": "4420", 00:18:53.911 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:53.911 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:53.911 "hdgst": false, 00:18:53.911 "ddgst": false 00:18:53.911 }, 00:18:53.911 "method": "bdev_nvme_attach_controller" 00:18:53.911 },{ 00:18:53.911 "params": { 00:18:53.911 "name": "Nvme7", 00:18:53.911 "trtype": "tcp", 00:18:53.911 "traddr": "10.0.0.2", 00:18:53.911 "adrfam": "ipv4", 00:18:53.911 "trsvcid": "4420", 00:18:53.911 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:53.911 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:53.911 "hdgst": false, 00:18:53.911 "ddgst": false 00:18:53.911 }, 00:18:53.911 "method": "bdev_nvme_attach_controller" 00:18:53.911 },{ 00:18:53.911 "params": { 00:18:53.911 "name": "Nvme8", 00:18:53.911 "trtype": "tcp", 
00:18:53.911 "traddr": "10.0.0.2", 00:18:53.911 "adrfam": "ipv4", 00:18:53.911 "trsvcid": "4420", 00:18:53.911 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:53.911 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:53.911 "hdgst": false, 00:18:53.911 "ddgst": false 00:18:53.911 }, 00:18:53.911 "method": "bdev_nvme_attach_controller" 00:18:53.911 },{ 00:18:53.911 "params": { 00:18:53.911 "name": "Nvme9", 00:18:53.911 "trtype": "tcp", 00:18:53.911 "traddr": "10.0.0.2", 00:18:53.911 "adrfam": "ipv4", 00:18:53.911 "trsvcid": "4420", 00:18:53.911 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:53.911 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:53.911 "hdgst": false, 00:18:53.911 "ddgst": false 00:18:53.911 }, 00:18:53.911 "method": "bdev_nvme_attach_controller" 00:18:53.911 },{ 00:18:53.911 "params": { 00:18:53.911 "name": "Nvme10", 00:18:53.911 "trtype": "tcp", 00:18:53.911 "traddr": "10.0.0.2", 00:18:53.911 "adrfam": "ipv4", 00:18:53.911 "trsvcid": "4420", 00:18:53.911 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:53.911 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:53.911 "hdgst": false, 00:18:53.911 "ddgst": false 00:18:53.911 }, 00:18:53.911 "method": "bdev_nvme_attach_controller" 00:18:53.911 }' 00:18:53.911 [2024-07-25 13:47:50.696615] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:53.911 [2024-07-25 13:47:50.696703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid603545 ] 00:18:53.911 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.911 [2024-07-25 13:47:50.760659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.912 [2024-07-25 13:47:50.870210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.807 Running I/O for 10 seconds... 
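The long run of config+=() records above is the harness's gen_nvmf_target_json helper from nvmf/common.sh unrolled once per subsystem: each pass of the traced for-loop appends one JSON fragment built from a here-doc, and the closing jq . / IFS=, / printf '%s\n' records show the fragments being comma-joined into the expanded bdevperf --json config shown above. A minimal sketch of that pattern, assuming the usual harness variables (TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, NVMF_PORT) and an outer "subsystems"/"bdev" wrapper that this excerpt does not show:

#!/usr/bin/env bash
# Sketch only: per-subsystem JSON fragments joined into one bdevperf config.
# TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT are set by the harness
# in the real run; the defaults below match the values visible in this log.
TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
NVMF_PORT=${NVMF_PORT:-4420}

config=()
for subsystem in "${@:-1}"; do
	# One fragment per subsystem, exactly as the traced loop does.
	config+=("$(cat <<-EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
	)")
done

# Comma-join the fragments and let jq validate/pretty-print the result.
# The "subsystems"/"bdev" wrapper here is an assumption; the excerpt only
# shows the joined fragments being printed.
jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [ $(IFS=","; printf '%s\n' "${config[*]}") ]
    }
  ]
}
JSON

Invoked as gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10, this yields the ten Nvme1-Nvme10 attach stanzas printed in the merged config above.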
00:18:55.807 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:55.807 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:18:55.807 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:55.807 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.807 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:56.064 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.064 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:18:56.064 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:18:56.064 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:18:56.064 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:18:56.064 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:18:56.064 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:18:56.064 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:56.064 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:56.064 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:56.064 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.064 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:56.064 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.064 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:18:56.064 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:18:56.064 13:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:18:56.323 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:18:56.323 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:56.323 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:56.323 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:56.323 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.323 13:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:56.323 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.323 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:18:56.323 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:18:56.323 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:18:56.580 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:18:56.580 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:56.580 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:56.580 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.580 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:56.580 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:56.580 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.580 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=135 00:18:56.580 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 135 -ge 100 ']' 00:18:56.580 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:18:56.580 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:18:56.580 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:18:56.580 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 603545 00:18:56.580 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 603545 ']' 00:18:56.580 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 603545 00:18:56.580 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:18:56.580 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:56.580 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 603545 00:18:56.580 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:56.580 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:56.580 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 603545' 00:18:56.580 killing process with pid 603545 00:18:56.580 13:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 603545
00:18:56.580 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 603545
00:18:56.838 Received shutdown signal, test time was about 0.902382 seconds
00:18:56.838
00:18:56.838                                                                                      Latency(us)
00:18:56.838 Device Information           : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:18:56.838 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:56.838 Verification LBA range: start 0x0 length 0x400
00:18:56.838 Nvme1n1                      :       0.90     284.10      17.76       0.00       0.00   221482.86    6844.87  254765.13
00:18:56.838 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:56.838 Verification LBA range: start 0x0 length 0x400
00:18:56.838 Nvme2n1                      :       0.89     216.85      13.55       0.00       0.00   285531.15   21068.61  257872.02
00:18:56.838 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:56.838 Verification LBA range: start 0x0 length 0x400
00:18:56.838 Nvme3n1                      :       0.90     284.80      17.80       0.00       0.00   212738.47   21554.06  243891.01
00:18:56.838 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:56.838 Verification LBA range: start 0x0 length 0x400
00:18:56.838 Nvme4n1                      :       0.85     225.31      14.08       0.00       0.00   261765.25   18641.35  253211.69
00:18:56.838 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:56.838 Verification LBA range: start 0x0 length 0x400
00:18:56.838 Nvme5n1                      :       0.86     222.66      13.92       0.00       0.00   259502.84   22233.69  248551.35
00:18:56.838 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:56.838 Verification LBA range: start 0x0 length 0x400
00:18:56.838 Nvme6n1                      :       0.87     221.29      13.83       0.00       0.00   255503.42   33010.73  237677.23
00:18:56.838 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:56.838 Verification LBA range: start 0x0 length 0x400
00:18:56.838 Nvme7n1                      :       0.87     219.93      13.75       0.00       0.00   251342.70   20583.16  254765.13
00:18:56.838 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:56.838 Verification LBA range: start 0x0 length 0x400
00:18:56.838 Nvme8n1                      :       0.88     218.05      13.63       0.00       0.00   247966.28   22330.79  254765.13
00:18:56.838 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:56.838 Verification LBA range: start 0x0 length 0x400
00:18:56.838 Nvme9n1                      :       0.89     215.67      13.48       0.00       0.00   245301.35   20486.07  259425.47
00:18:56.838 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:18:56.838 Verification LBA range: start 0x0 length 0x400
00:18:56.838 Nvme10n1                     :       0.89     214.73      13.42       0.00       0.00   240719.64   22330.79  284280.60
00:18:56.838 ===================================================================================================================
00:18:56.838 Total                        :               2323.39     145.21       0.00       0.00   246243.22    6844.87  284280.60
00:18:57.097 13:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:18:58.034 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 603368
00:18:58.034 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:18:58.034 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:18:58.034 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:58.034 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:58.034 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:18:58.034 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:58.034 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:18:58.034 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:58.034 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:18:58.034 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:58.034 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:58.034 rmmod nvme_tcp 00:18:58.034 rmmod nvme_fabrics 00:18:58.034 rmmod nvme_keyring 00:18:58.034 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:58.034 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:18:58.034 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:18:58.034 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 603368 ']' 00:18:58.034 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 603368 00:18:58.034 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 603368 ']' 00:18:58.034 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 603368 00:18:58.034 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:18:58.034 13:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:58.034 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 603368 00:18:58.034 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:58.034 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:58.034 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 603368' 00:18:58.034 killing process with pid 603368 00:18:58.034 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 603368 00:18:58.034 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 603368 00:18:58.604 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:58.604 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:58.604 
13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:58.604 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:58.604 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:58.604 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.604 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:58.604 13:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:01.143 00:19:01.143 real 0m8.082s 00:19:01.143 user 0m24.981s 00:19:01.143 sys 0m1.485s 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:01.143 ************************************ 00:19:01.143 END TEST nvmf_shutdown_tc2 00:19:01.143 ************************************ 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:01.143 ************************************ 00:19:01.143 START TEST nvmf_shutdown_tc3 00:19:01.143 ************************************ 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.143 
13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:01.143 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:01.143 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.143 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:01.144 13:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:01.144 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:01.144 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:01.144 13:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:01.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:01.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:19:01.144 00:19:01.144 --- 10.0.0.2 ping statistics --- 00:19:01.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.144 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:01.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:01.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:19:01.144 00:19:01.144 --- 10.0.0.1 ping statistics --- 00:19:01.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.144 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=604467 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 604467 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 604467 ']' 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
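Before nvmf_shutdown_tc3 can start its target, the nvmftestinit/nvmf_tcp_init sequence traced above discovers the two e810 ports, moves the target-side port into a private network namespace, addresses both ends, opens TCP port 4420, and ping-checks reachability in both directions; only then is nvmf_tgt launched inside that namespace. The same plumbing, condensed into a standalone sketch (root required; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are the harness's own):

#!/usr/bin/env bash
# Condensed from the nvmf_tcp_init trace above. Assumes two NICs already
# renamed cvl_0_0 (target side) and cvl_0_1 (initiator side) by the harness.
set -e
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add $NS
ip link set cvl_0_0 netns $NS                           # target port lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address (host side)
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (namespace side)

ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up

# Allow NVMe/TCP traffic in from the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Reachability check in both directions, mirroring the two ping records
# (0.151 ms and 0.119 ms) in the trace, before nvmf_tgt starts in $NS.
ping -c 1 10.0.0.2
ip netns exec $NS ping -c 1 10.0.0.1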
00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:01.144 13:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:01.144 [2024-07-25 13:47:57.905687] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:01.144 [2024-07-25 13:47:57.905763] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.144 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.144 [2024-07-25 13:47:57.968190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:01.144 [2024-07-25 13:47:58.070099] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.144 [2024-07-25 13:47:58.070169] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.144 [2024-07-25 13:47:58.070190] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.144 [2024-07-25 13:47:58.070201] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.144 [2024-07-25 13:47:58.070218] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:01.144 [2024-07-25 13:47:58.070298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.145 [2024-07-25 13:47:58.070372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:01.145 [2024-07-25 13:47:58.070430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:01.145 [2024-07-25 13:47:58.070432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.405 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:01.405 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:19:01.405 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:01.405 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:01.405 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:01.405 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.405 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:01.405 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.405 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:01.405 [2024-07-25 13:47:58.219595] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.405 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.405 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:19:01.405 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:01.405 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:01.405 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:01.405 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:01.405 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:01.405 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:01.405 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:01.406 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:01.406 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:01.406 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:01.406 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:01.406 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:01.406 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:01.406 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:01.406 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:01.406 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:01.406 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:01.406 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:01.406 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:01.406 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:01.406 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:01.406 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:01.406 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:01.406 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:01.406 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:01.406 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.406 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:19:01.406 Malloc1 00:19:01.406 [2024-07-25 13:47:58.303685] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.406 Malloc2 00:19:01.406 Malloc3 00:19:01.406 Malloc4 00:19:01.666 Malloc5 00:19:01.666 Malloc6 00:19:01.666 Malloc7 00:19:01.666 Malloc8 00:19:01.666 Malloc9 00:19:01.926 Malloc10 00:19:01.926 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.926 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:01.926 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:01.926 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:01.926 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=604645 00:19:01.926 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 604645 /var/tmp/bdevperf.sock 00:19:01.926 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 604645 ']' 00:19:01.926 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.926 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:01.926 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:01.926 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:01.926 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:19:01.926 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
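From here tc3 repeats the tc2 flow: bdevperf is started against its own RPC socket (perfpid=604645 above), the harness waits for framework init, and then polls Nvme1n1's read counter until enough I/O has flowed to make killing the target mid-I/O meaningful. A sketch of that waitforio loop as the tc2 trace unrolled it earlier (rpc_cmd here is a stand-in for the harness helper wrapping scripts/rpc.py; the 10-attempt / 0.25 s / 100-op numbers are the ones visible in the trace):

#!/usr/bin/env bash
# waitforio, reconstructed from the tc2 trace: poll bdev_get_iostat over the
# bdevperf RPC socket until the bdev has completed at least 100 reads.
rpc_sock=/var/tmp/bdevperf.sock
bdev=Nvme1n1

rpc_cmd() {
	# Stand-in for the harness helper; adjust the rpc.py path for your tree.
	/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"
}

waitforio() {
	local ret=1 i
	for ((i = 10; i != 0; i--)); do
		read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
			| jq -r '.bdevs[0].num_read_ops')
		if [ "$read_io_count" -ge 100 ]; then
			ret=0
			break
		fi
		sleep 0.25
	done
	return $ret
}

rpc_cmd -s "$rpc_sock" framework_wait_init
waitforio

In the tc2 run above the counter went 3, then 67, then 135 across three polls, so the loop returned 0 and the harness proceeded to kill the bdevperf process while I/O was still in flight.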
00:19:01.926 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config
00:19:01.926 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:01.926 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:19:01.926 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:19:01.926 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP",
        "adrfam": "ipv4",
        "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": ${hdgst:-false},
        "ddgst": ${ddgst:-false}
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
    )")
00:19:01.926 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
[the "for subsystem" / heredoc / cat pass above repeats verbatim for each of the ten subsystems; only the interpolated $subsystem differs at expansion time]
00:19:01.927 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq .
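For reference, a minimal bash reconstruction of the loop this xtrace records (nvmf/common.sh@532-558). The function name gen_target_json_sketch, the fallback defaults, and the outer "subsystems" wrapper are assumptions for illustration; the fragment shape, the IFS="," join, and the closing jq . are read straight off the trace.

gen_target_json_sketch() {
    local subsystem config=()
    # One bdev_nvme_attach_controller fragment per requested subsystem id.
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the fragments inside a bdev-subsystem wrapper and let jq
    # validate and pretty-print the result (the wrapper shape is an assumption).
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev",
                    "config": [ $(IFS=","; printf '%s\n' "${config[*]}") ] } ] }
JSON
}

Invoked as gen_target_json_sketch 1 2 3 4 5 6 7 8 9 10 it yields the ten attach-controller entries that the printf below shows fully expanded.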
00:19:01.927 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=,
00:19:01.927 13:47:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
},{ ... }'
[the elided entries Nvme2 through Nvme10 are identical apart from the numeric suffix in name, subnqn and hostnqn]
00:19:01.928 [2024-07-25 13:47:58.825814] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:19:01.928 [2024-07-25 13:47:58.825899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid604645 ]
00:19:01.928 EAL: No free 2048 kB hugepages reported on node 1
00:19:01.928 [2024-07-25 13:47:58.888593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:02.186 [2024-07-25 13:47:58.998758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:03.562 Running I/O for 10 seconds...
00:19:03.820 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:03.820 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0
00:19:03.820 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:19:03.820 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:03.820 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:19:03.820 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:03.820 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:19:03.820 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:19:03.820 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:19:03.820 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']'
00:19:03.820 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1
00:19:03.820 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i
00:19:03.820 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 ))
00:19:03.820 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:19:03.820 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:19:03.820 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:19:03.820 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:03.820 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:19:03.820 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:03.820 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3
00:19:03.820 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']'
00:19:03.820 13:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25
00:19:04.079 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- ))
00:19:04.079 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
[second poll, same rpc_cmd/jq sequence as above: read_io_count=67, '[' 67 -ge 100 ']' fails, sleep 0.25]
00:19:04.615 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- ))
00:19:04.615 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
[third poll, same sequence: read_io_count=131]
00:19:04.615 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:19:04.615 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0
00:19:04.615 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break
00:19:04.615 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0
00:19:04.615 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 604467
00:19:04.615 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 604467 ']'
00:19:04.615 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 604467
00:19:04.615 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname
00:19:04.615 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:04.615 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 604467
00:19:04.615 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:19:04.615 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:19:04.615 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 604467'
00:19:04.615 killing process with pid 604467
00:19:04.615 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 604467
00:19:04.615 13:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 604467
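Reconstructed below in bash from the xtrace above: the waitforio poller (target/shutdown.sh@50-69) and the killprocess helper (autotest_common.sh@950-974). Everything shown is read off the trace except the sudo branch, which the trace only ever evaluates as false, and rpc_cmd, assumed here to be the usual SPDK test wrapper around scripts/rpc.py.

# Poll a bdev's read-op counter over the app's RPC socket until it reaches
# 100 or ten attempts are exhausted.
waitforio() {
    local sock=$1 bdev=$2
    [ -z "$sock" ] && return 1
    [ -z "$bdev" ] && return 1
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        # bdev_get_iostat returns JSON; pull the first bdev's read-op count.
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

# Kill the target app once I/O is confirmed flowing.
killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1      # still alive?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # The trace evaluates '[' reactor_1 = sudo ']' as false; what the real
    # helper does for sudo-wrapped processes is not visible in this log.
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}

In this run the counter climbs 3, 67, 131 across three polls, so the loop exits on the third pass and killprocess 604467 tears the target down, which is what triggers the shutdown-path messages that follow.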
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891920 is same with the state(5) to be set 00:19:04.615 [2024-07-25 13:48:01.455196] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891920 is same with the state(5) to be set 00:19:04.615 [2024-07-25 13:48:01.455208] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891920 is same with the state(5) to be set 00:19:04.615 [2024-07-25 13:48:01.455220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891920 is same with the state(5) to be set 00:19:04.615 [2024-07-25 13:48:01.455233] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891920 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.455245] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891920 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.455257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891920 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.455269] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891920 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.455282] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891920 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.455294] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891920 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.455315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891920 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.455328] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891920 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.455340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891920 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.455362] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891920 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.455374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891920 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.455387] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891920 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.455399] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891920 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.455411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891920 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.455423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891920 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.455434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891920 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.455446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891920 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.455459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891920 is same with the 
state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.455471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891920 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456625] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456659] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456674] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456687] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456699] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456736] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456748] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456760] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456772] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456784] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456796] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456808] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456826] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456840] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456852] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456864] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456876] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456902] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456914] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456938] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456950] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456963] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456987] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.456999] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.457012] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.457024] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.457036] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.457048] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.457067] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.457082] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.457094] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.457107] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.457129] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.457141] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.457154] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.457166] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.457182] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 
13:48:01.457195] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.457208] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.457219] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.457231] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.457243] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.457255] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.616 [2024-07-25 13:48:01.457267] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.457279] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.457292] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.457304] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.457316] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.457328] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.457340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.457360] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.457372] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.457384] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.457397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.457410] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.457422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.457434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.457446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894440 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.458801] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same 
with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.458825] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.458839] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.458852] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.458864] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.458881] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.458894] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.458906] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.458919] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.458931] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.458943] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.458955] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.458968] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.458980] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.458992] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459005] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459030] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459042] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459054] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459075] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459088] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459105] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459116] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459129] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459142] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459154] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459166] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459178] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459191] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459203] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459215] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459232] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459245] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459269] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459281] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459293] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459305] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459317] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459349] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459361] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459373] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the 
state(5) to be set 00:19:04.617 [2024-07-25 13:48:01.459385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.459396] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.459408] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.459420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.459432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.459443] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.459455] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.459467] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.459479] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.459491] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.459503] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.459515] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.459527] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.459540] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.459552] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.459571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.459585] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.459597] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.459609] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891de0 is same with the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.461911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.618 [2024-07-25 13:48:01.461953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.618 [2024-07-25 13:48:01.461981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.618 [2024-07-25 13:48:01.461997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.618 [2024-07-25 13:48:01.462013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.618 [2024-07-25 13:48:01.462028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.618 [2024-07-25 13:48:01.462044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.618 [2024-07-25 13:48:01.462073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.618 [2024-07-25 13:48:01.462094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.618 [2024-07-25 13:48:01.462115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.618 [2024-07-25 13:48:01.462131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.618 [2024-07-25 13:48:01.462145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.618 [2024-07-25 13:48:01.462160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.618 [2024-07-25 13:48:01.462173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.618 [2024-07-25 13:48:01.462188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.618 [2024-07-25 13:48:01.462202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.618 [2024-07-25 13:48:01.462217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.618 [2024-07-25 13:48:01.462232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.618 [2024-07-25 13:48:01.462247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.618 [2024-07-25 13:48:01.462261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.618 [2024-07-25 13:48:01.462277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.618 [2024-07-25 13:48:01.462291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.618 [2024-07-25 13:48:01.462312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.618 [2024-07-25 13:48:01.462326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.618 [2024-07-25 13:48:01.462341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.618 [2024-07-25 13:48:01.462364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.618 [2024-07-25 13:48:01.462368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with [2024-07-25 13:48:01.462379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:1the state(5) to be set 00:19:04.618 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.618 [2024-07-25 13:48:01.462397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.618 [2024-07-25 13:48:01.462400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.462413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:1[2024-07-25 13:48:01.462415] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.618 the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.462429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 13:48:01.462429] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.618 the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.462445] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.462447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.618 [2024-07-25 13:48:01.462458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.618 [2024-07-25 13:48:01.462461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.618 [2024-07-25 13:48:01.462471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.619 [2024-07-25 13:48:01.462483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.619 [2024-07-25 13:48:01.462496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 
00:19:04.619 [2024-07-25 13:48:01.462506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:1[2024-07-25 13:48:01.462508] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.619 the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 13:48:01.462522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.619 the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with [2024-07-25 13:48:01.462539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:1the state(5) to be set 00:19:04.619 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.619 [2024-07-25 13:48:01.462557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with [2024-07-25 13:48:01.462558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:19:04.619 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.619 [2024-07-25 13:48:01.462571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.619 [2024-07-25 13:48:01.462584] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.619 [2024-07-25 13:48:01.462596] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.619 [2024-07-25 13:48:01.462609] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.619 [2024-07-25 13:48:01.462622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462635] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with [2024-07-25 13:48:01.462635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:1the state(5) to be set 00:19:04.619 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.619 [2024-07-25 13:48:01.462648] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:04.619 [2024-07-25 13:48:01.462661] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.619 [2024-07-25 13:48:01.462673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.619 [2024-07-25 13:48:01.462686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.619 [2024-07-25 13:48:01.462699] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 13:48:01.462712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.619 the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462726] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with [2024-07-25 13:48:01.462728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:1the state(5) to be set 00:19:04.619 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.619 [2024-07-25 13:48:01.462743] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with [2024-07-25 13:48:01.462744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:19:04.619 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.619 [2024-07-25 13:48:01.462757] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.619 [2024-07-25 13:48:01.462770] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.619 [2024-07-25 13:48:01.462783] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.619 [2024-07-25 13:48:01.462795] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 c[2024-07-25 13:48:01.462808] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.619 the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462822] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.619 [2024-07-25 13:48:01.462834] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.619 [2024-07-25 13:48:01.462847] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.619 [2024-07-25 13:48:01.462859] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-25 13:48:01.462871] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.619 the state(5) to be set 00:19:04.619 [2024-07-25 13:48:01.462884] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.462886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.620 [2024-07-25 13:48:01.462896] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.462901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.620 [2024-07-25 13:48:01.462909] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.462917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.620 [2024-07-25 13:48:01.462925] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.462931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.620 [2024-07-25 13:48:01.462938] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.462947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.620 [2024-07-25 13:48:01.462951] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.462961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.620 [2024-07-25 13:48:01.462963] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.462977] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.462977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.620 [2024-07-25 13:48:01.462991] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.462993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.620 [2024-07-25 13:48:01.463003] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.463008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.620 [2024-07-25 13:48:01.463016] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.463022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.620 [2024-07-25 13:48:01.463029] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.463039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.620 [2024-07-25 13:48:01.463042] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.463053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.620 [2024-07-25 13:48:01.463054] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.463076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.620 [2024-07-25 13:48:01.463078] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.463093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.620 [2024-07-25 13:48:01.463093] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.463112] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.463120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.620 [2024-07-25 13:48:01.463129] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.463135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.620 [2024-07-25 13:48:01.463142] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.463151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.620 [2024-07-25 13:48:01.463155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.463164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.620 [2024-07-25 13:48:01.463168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.463180] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.463180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.620 [2024-07-25 13:48:01.463195] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.463197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.620 [2024-07-25 13:48:01.463207] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.463213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.620 [2024-07-25 13:48:01.463220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.463227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.620 [2024-07-25 13:48:01.463232] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892780 is same with the state(5) to be set 00:19:04.620 [2024-07-25 13:48:01.463243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.620 [2024-07-25 13:48:01.463258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.620 [2024-07-25 13:48:01.463272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.620 [2024-07-25 13:48:01.463286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.620 [2024-07-25 13:48:01.463302] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.620 [2024-07-25 13:48:01.463315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.621 [2024-07-25 13:48:01.463330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.621 [2024-07-25 13:48:01.463356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.621 [2024-07-25 13:48:01.463371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.621 [2024-07-25 13:48:01.463389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.621 [2024-07-25 13:48:01.463405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.621 [2024-07-25 13:48:01.463419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.621 [2024-07-25 13:48:01.463435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.621 [2024-07-25 13:48:01.463448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.621 [2024-07-25 13:48:01.463464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.621 [2024-07-25 13:48:01.463478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.621 [2024-07-25 13:48:01.463493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.621 [2024-07-25 13:48:01.463507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.621 [2024-07-25 13:48:01.463522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.621 [2024-07-25 13:48:01.463535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.621 [2024-07-25 13:48:01.463551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.621 [2024-07-25 13:48:01.463565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.621 [2024-07-25 13:48:01.463581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.621 [2024-07-25 13:48:01.463594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.621 [2024-07-25 13:48:01.463609] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.621 [2024-07-25 13:48:01.463623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.621 [2024-07-25 13:48:01.463638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.621 [2024-07-25 13:48:01.463652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.621 [2024-07-25 13:48:01.463667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.621 [2024-07-25 13:48:01.463680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.621 [2024-07-25 13:48:01.463696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.621 [2024-07-25 13:48:01.463709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.621 [2024-07-25 13:48:01.463724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.621 [2024-07-25 13:48:01.463738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.621 [2024-07-25 13:48:01.463756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.621 [2024-07-25 13:48:01.463771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.621 [2024-07-25 13:48:01.463787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.621 [2024-07-25 13:48:01.463801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.621 [2024-07-25 13:48:01.463816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.621 [2024-07-25 13:48:01.463831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.621 [2024-07-25 13:48:01.463846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.621 [2024-07-25 13:48:01.463860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.621 [2024-07-25 13:48:01.463875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.621 [2024-07-25 13:48:01.463889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.622 [2024-07-25 13:48:01.463904] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.622 [2024-07-25 13:48:01.463917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.622 [2024-07-25 13:48:01.463933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.622 [2024-07-25 13:48:01.463946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.622 [2024-07-25 13:48:01.463991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:19:04.622 [2024-07-25 13:48:01.464000] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464039] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464052] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464071] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464076] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2885790 was disconnected and freed. reset controller. 
00:19:04.622 [2024-07-25 13:48:01.464085] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464106] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464118] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464130] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464147] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464159] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464172] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464196] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464208] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464232] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464245] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464269] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464281] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464293] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464305] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464317] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464341] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464358] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464370] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464382] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464395] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464407] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464419] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464431] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464444] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464457] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464469] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464486] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464503] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464516] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464542] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464554] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464579] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464591] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464604] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464616] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464629] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464641] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464653] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464666] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464679] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464691] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464704] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.622 [2024-07-25 13:48:01.464716] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.623 [2024-07-25 13:48:01.464729] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.623 [2024-07-25 13:48:01.464746] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.623 [2024-07-25 13:48:01.464759] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.623 [2024-07-25 13:48:01.464771] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.623 [2024-07-25 13:48:01.464783] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.623 [2024-07-25 13:48:01.464796] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.623 [2024-07-25 13:48:01.464808] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.623 [2024-07-25 13:48:01.464820] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c40 is same with the state(5) to be set 00:19:04.623 [2024-07-25 13:48:01.465043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.623 [2024-07-25 13:48:01.465082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.623 [2024-07-25 13:48:01.465108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.623 [2024-07-25 13:48:01.465122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.623 [2024-07-25 13:48:01.465136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.623 [2024-07-25 13:48:01.465150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.623 [2024-07-25 13:48:01.465164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.623 [2024-07-25 13:48:01.465177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.623 [2024-07-25 13:48:01.465191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27dfba0 is same with the state(5) to be set 00:19:04.623 [2024-07-25 13:48:01.465263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.623 [2024-07-25 13:48:01.465285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.623 [2024-07-25 13:48:01.465300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.623 [2024-07-25 13:48:01.465314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.623 [2024-07-25 13:48:01.465327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.623 [2024-07-25 13:48:01.465341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.623 [2024-07-25 13:48:01.465362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.623 [2024-07-25 13:48:01.465375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.623 [2024-07-25 13:48:01.465388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2727f00 is same with the state(5) to be set 00:19:04.623 [2024-07-25 13:48:01.465427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.623 [2024-07-25 13:48:01.465447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.623 [2024-07-25 13:48:01.465462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.623 [2024-07-25 13:48:01.465476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.623 [2024-07-25 13:48:01.465490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.623 [2024-07-25 13:48:01.465503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.623 [2024-07-25 13:48:01.465517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.623 [2024-07-25 13:48:01.465531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.623 [2024-07-25 13:48:01.465547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2721b50 is same with the state(5) to be set 00:19:04.623 
[2024-07-25 13:48:01.465592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.623 [2024-07-25 13:48:01.465612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.623 [2024-07-25 13:48:01.465627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.623 [2024-07-25 13:48:01.465640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.623 [2024-07-25 13:48:01.465654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.623 [2024-07-25 13:48:01.465667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.623 [2024-07-25 13:48:01.465682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.623 [2024-07-25 13:48:01.465695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.623 [2024-07-25 13:48:01.465707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x272d4a0 is same with the state(5) to be set 00:19:04.623 [2024-07-25 13:48:01.465754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.623 [2024-07-25 13:48:01.465774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.623 [2024-07-25 13:48:01.465789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.623 [2024-07-25 13:48:01.465803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.623 [2024-07-25 13:48:01.465817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.623 [2024-07-25 13:48:01.465830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.623 [2024-07-25 13:48:01.465843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.623 [2024-07-25 13:48:01.465857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.623 [2024-07-25 13:48:01.465869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26fd830 is same with the state(5) to be set 00:19:04.623 [2024-07-25 13:48:01.465913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.623 [2024-07-25 13:48:01.465933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.624 [2024-07-25 13:48:01.465948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.624 [2024-07-25 13:48:01.465961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.624 [2024-07-25 13:48:01.465975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.624 [2024-07-25 13:48:01.465989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.624 [2024-07-25 13:48:01.466003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.624 [2024-07-25 13:48:01.466023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.624 [2024-07-25 13:48:01.466036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x272df00 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466078] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466104] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466118] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466130] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466142] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466154] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466167] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466179] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466191] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466228] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466240] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466265] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the 
state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466277] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466301] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466313] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466338] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466356] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466380] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466392] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466409] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466495] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466507] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466520] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466543] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466555] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466580] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466592] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466604] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466616] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466629] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466641] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.624 [2024-07-25 13:48:01.466653] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.466665] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.466677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.466690] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.466702] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.466714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.466726] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.466741] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.466754] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.466766] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.466778] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.466790] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.466802] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.625 [2024-07-25 
13:48:01.466815] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.466827] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.466839] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.466851] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.466863] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893120 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:19:04.625 [2024-07-25 13:48:01.468214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2727f00 (9): Bad file descriptor 00:19:04.625 [2024-07-25 13:48:01.468458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468510] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468549] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468561] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468573] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468645] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468659] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468672] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468690] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be 
set 00:19:04.625 [2024-07-25 13:48:01.468704] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468715] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468739] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468773] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468787] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468800] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468812] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468825] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468837] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468862] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468874] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468886] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468899] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468924] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468936] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468949] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468961] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468974] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468986] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.468999] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.469011] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.469023] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.469035] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.469051] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.625 [2024-07-25 13:48:01.469073] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.626 [2024-07-25 13:48:01.469087] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.626 [2024-07-25 13:48:01.469106] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.626 [2024-07-25 13:48:01.469119] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.626 [2024-07-25 13:48:01.469131] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.626 [2024-07-25 13:48:01.469143] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.626 [2024-07-25 13:48:01.469155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.626 [2024-07-25 13:48:01.469167] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.626 [2024-07-25 13:48:01.469179] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.626 [2024-07-25 13:48:01.469192] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.626 [2024-07-25 13:48:01.469204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.626 [2024-07-25 13:48:01.469217] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.626 [2024-07-25 13:48:01.469229] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.626 [2024-07-25 13:48:01.469241] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.626 [2024-07-25 13:48:01.469254] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.626 [2024-07-25 13:48:01.469266] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.626 [2024-07-25 13:48:01.469278] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.626 [2024-07-25 13:48:01.469290] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.626 [2024-07-25 13:48:01.469303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.626 [2024-07-25 13:48:01.469315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.626 [2024-07-25 13:48:01.469327] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18935e0 is same with the state(5) to be set 00:19:04.626 [2024-07-25 13:48:01.469842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:04.626 [2024-07-25 13:48:01.469876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2727f00 with addr=10.0.0.2, port=4420 00:19:04.626 [2024-07-25 13:48:01.469894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2727f00 is same with the state(5) to be set 00:19:04.626 [2024-07-25 13:48:01.470023] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:04.626 [2024-07-25 13:48:01.470136] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:04.626 [2024-07-25 13:48:01.470477] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:04.626 [2024-07-25 13:48:01.470511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2727f00 (9): Bad file descriptor 00:19:04.626 [2024-07-25 13:48:01.470646] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:04.626 [2024-07-25 13:48:01.470730] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:04.626 [2024-07-25 13:48:01.470885] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:19:04.626 [2024-07-25 13:48:01.470907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:19:04.626 [2024-07-25 13:48:01.470924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:19:04.626 [2024-07-25 13:48:01.470979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.626 [2024-07-25 13:48:01.471001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.626 [2024-07-25 13:48:01.471024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.626 [2024-07-25 13:48:01.471040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.626 [2024-07-25 13:48:01.471057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.626 [2024-07-25 13:48:01.471081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.626 [2024-07-25 13:48:01.471097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.626 [2024-07-25 13:48:01.471111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.626 [2024-07-25 13:48:01.471127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.626 [2024-07-25 13:48:01.471141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.626 [2024-07-25 13:48:01.471157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.626 [2024-07-25 13:48:01.471172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.626 [2024-07-25 13:48:01.471188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.626 [2024-07-25 13:48:01.471202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.626 [2024-07-25 13:48:01.471218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.626 [2024-07-25 13:48:01.471232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.626 [2024-07-25 13:48:01.471248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.626 [2024-07-25 13:48:01.471269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.626 [2024-07-25 13:48:01.471285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.626 [2024-07-25 13:48:01.471299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.626 [2024-07-25 
13:48:01.471315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.626 [2024-07-25 13:48:01.471334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.626 [2024-07-25 13:48:01.471351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.626 [2024-07-25 13:48:01.471365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.471381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.471395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.471411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.471425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.471442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.471456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.471472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.471486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.471502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.471516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.471532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.471546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.471562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.471576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.471592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.471606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.471622] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.471636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.471652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.471666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.471681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.471695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.471714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.471729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.471746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.471761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.471776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.471791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.471807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.471820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.471836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.471849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.471865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.471879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.471894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.471908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.471923] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.471937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.471952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.471966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.471981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.471995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.472011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.472024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.472040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.472054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.472080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.472098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.472115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.627 [2024-07-25 13:48:01.472129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.627 [2024-07-25 13:48:01.472144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472526] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.628 [2024-07-25 13:48:01.472878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.628 [2024-07-25 13:48:01.472891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.629 [2024-07-25 13:48:01.472907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.629 [2024-07-25 13:48:01.472920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.629 [2024-07-25 13:48:01.472934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2883020 is same with the state(5) to be set 00:19:04.629 [2024-07-25 13:48:01.473006] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2883020 was disconnected and freed. reset controller. 00:19:04.629 [2024-07-25 13:48:01.473273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:04.629 [2024-07-25 13:48:01.474586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:04.629 [2024-07-25 13:48:01.474619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26fd830 (9): Bad file descriptor 00:19:04.629 [2024-07-25 13:48:01.474726] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:04.629 [2024-07-25 13:48:01.475375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:04.629 [2024-07-25 13:48:01.475404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26fd830 with addr=10.0.0.2, port=4420 00:19:04.629 [2024-07-25 13:48:01.475421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26fd830 is same with the state(5) to be set 00:19:04.629 [2024-07-25 13:48:01.475461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.629 [2024-07-25 13:48:01.475481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.629 [2024-07-25 13:48:01.475496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.629 [2024-07-25 13:48:01.475509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.629 [2024-07-25 13:48:01.475523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.629 [2024-07-25 13:48:01.475536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.629 [2024-07-25 13:48:01.475550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.629 [2024-07-25 13:48:01.475563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.629 [2024-07-25 13:48:01.475562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893aa0 is same with the state(5) to be set 00:19:04.629 [2024-07-25 13:48:01.475576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28bed70 is same with the state(5) to be set 00:19:04.629 [2024-07-25 13:48:01.475609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27dfba0 (9): Bad file descriptor 00:19:04.629 [2024-07-25 13:48:01.475675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.629 [2024-07-25 13:48:01.475698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.629 [2024-07-25 13:48:01.475714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.629 [2024-07-25 13:48:01.475728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.629 [2024-07-25 13:48:01.475742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.629 [2024-07-25 13:48:01.475755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.629 [2024-07-25 13:48:01.475769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.629 [2024-07-25 13:48:01.475782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.629 [2024-07-25 13:48:01.475795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27e2660 is same with the state(5) to be set 00:19:04.629 [2024-07-25 13:48:01.475827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2721b50 (9): Bad file descriptor 00:19:04.629 [2024-07-25 13:48:01.475858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x272d4a0 (9): Bad file descriptor 00:19:04.629 [2024-07-25 13:48:01.475889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x272df00 (9): Bad file descriptor 00:19:04.630 [2024-07-25 13:48:01.475939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.630 [2024-07-25 13:48:01.475961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.630 [2024-07-25 13:48:01.475976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.630 [2024-07-25 13:48:01.475990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.630 [2024-07-25 13:48:01.476004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.630 [2024-07-25 13:48:01.476018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.630 [2024-07-25 13:48:01.476032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.630 [2024-07-25 13:48:01.476045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.630 [2024-07-25 13:48:01.476066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ff610 is same with the state(5) to be set 00:19:04.630 [2024-07-25 13:48:01.476278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26fd830 (9): Bad file descriptor 00:19:04.630 [2024-07-25 13:48:01.476402] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893aa0 is same with the state(5) to be set 00:19:04.630 [2024-07-25 13:48:01.476457] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:04.630 [2024-07-25 13:48:01.476478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:04.630 [2024-07-25 13:48:01.476492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:04.630 [2024-07-25 13:48:01.476648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:04.630 [2024-07-25 13:48:01.477096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893f60 is same with the state(5) to be set 00:19:04.631 [2024-07-25 13:48:01.477396] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:04.631 [2024-07-25 13:48:01.477879] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893f60 is same with the state(5) to be set 00:19:04.631 [2024-07-25 13:48:01.478119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.631 [2024-07-25
13:48:01.478142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.631 [2024-07-25 13:48:01.478164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.631 [2024-07-25 13:48:01.478185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:2 through cid:62 (lba:16640 through lba:24320, len:128, in steps of 128) ...]
00:19:04.633 [2024-07-25 13:48:01.480137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.633 [2024-07-25 13:48:01.480151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.633 [2024-07-25 13:48:01.480166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27b4070 is same with the state(5) to be set
00:19:04.633 [2024-07-25 13:48:01.480238] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x27b4070 was disconnected and freed. reset controller.
00:19:04.633 [2024-07-25 13:48:01.481477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:19:04.633 [2024-07-25 13:48:01.481539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28abe10 (9): Bad file descriptor
00:19:04.634 [2024-07-25 13:48:01.481614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:19:04.634 [2024-07-25 13:48:01.482070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:04.634 [2024-07-25 13:48:01.482100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28abe10 with addr=10.0.0.2, port=4420
00:19:04.634 [2024-07-25 13:48:01.482117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28abe10 is same with the state(5) to be set
00:19:04.634 [2024-07-25 13:48:01.482216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:04.634 [2024-07-25 13:48:01.482242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2727f00 with addr=10.0.0.2, port=4420
00:19:04.634 [2024-07-25 13:48:01.482257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2727f00 is same with the state(5) to be set
00:19:04.634 [2024-07-25 13:48:01.482325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28abe10 (9): Bad file descriptor
00:19:04.634 [2024-07-25 13:48:01.482358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2727f00 (9): Bad file descriptor
00:19:04.634 [2024-07-25 13:48:01.482422] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:19:04.634 [2024-07-25 13:48:01.482440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:19:04.634 [2024-07-25 13:48:01.482454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:19:04.634 [2024-07-25 13:48:01.482473] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:19:04.634 [2024-07-25 13:48:01.482487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:19:04.634 [2024-07-25 13:48:01.482500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:19:04.634 [2024-07-25 13:48:01.482554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:04.634 [2024-07-25 13:48:01.482571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:04.634 [2024-07-25 13:48:01.485010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:04.634 [2024-07-25 13:48:01.485189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:04.634 [2024-07-25 13:48:01.485216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26fd830 with addr=10.0.0.2, port=4420
00:19:04.634 [2024-07-25 13:48:01.485232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26fd830 is same with the state(5) to be set
00:19:04.634 [2024-07-25 13:48:01.485290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26fd830 (9): Bad file descriptor
00:19:04.634 [2024-07-25 13:48:01.485351] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:04.634 [2024-07-25 13:48:01.485368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:04.634 [2024-07-25 13:48:01.485381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:04.634 [2024-07-25 13:48:01.485405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28bed70 (9): Bad file descriptor
00:19:04.634 [2024-07-25 13:48:01.485445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27e2660 (9): Bad file descriptor
00:19:04.634 [2024-07-25 13:48:01.485496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ff610 (9): Bad file descriptor
00:19:04.634 [2024-07-25 13:48:01.485572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:04.634 [2024-07-25 13:48:01.485655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.634 [2024-07-25 13:48:01.485677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command / ABORTED - SQ DELETION (00/08) pairs repeat for WRITE cid:62-63 (lba:32512 and lba:32640) and READ cid:0 through cid:59 (lba:24576 through lba:32128, len:128, in steps of 128) ...]
00:19:04.636 [2024-07-25 13:48:01.487643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.636 [2024-07-25 13:48:01.487657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.636 [2024-07-25 13:48:01.487672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28842e0 is same with the state(5) to be set
00:19:04.636 [2024-07-25 13:48:01.488938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.636 [2024-07-25 13:48:01.488967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1 through cid:54 (lba:24704 through lba:31488, len:128, in steps of 128) ...]
00:19:04.638 [2024-07-25 13:48:01.490661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.638 [2024-07-25 13:48:01.490674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.638 [2024-07-25 
13:48:01.490690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-07-25 13:48:01.490703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.638 [2024-07-25 13:48:01.490719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-07-25 13:48:01.490732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.638 [2024-07-25 13:48:01.490748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-07-25 13:48:01.490761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.638 [2024-07-25 13:48:01.490777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-07-25 13:48:01.490791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.638 [2024-07-25 13:48:01.490806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-07-25 13:48:01.490820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.638 [2024-07-25 13:48:01.490835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-07-25 13:48:01.490849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.638 [2024-07-25 13:48:01.490864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-07-25 13:48:01.490878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.638 [2024-07-25 13:48:01.490896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-07-25 13:48:01.490911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.638 [2024-07-25 13:48:01.490925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27b0fe0 is same with the state(5) to be set 00:19:04.638 [2024-07-25 13:48:01.492183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-07-25 13:48:01.492206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.638 [2024-07-25 13:48:01.492228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-07-25 13:48:01.492243] 
[... 62 similar notice pairs: READ cid:5-59 nsid:1 lba:17024-23936, WRITE cid:0-3 nsid:1 lba:24576-24960, READ cid:60-62 nsid:1 lba:24064-24320, each ABORTED - SQ DELETION ...]
00:19:04.640 [2024-07-25 13:48:01.494148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.640 [2024-07-25 13:48:01.494162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.640 [2024-07-25 13:48:01.494177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26f9000 is same with the state(5) to be set
00:19:04.640 [2024-07-25 13:48:01.495483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.640 [2024-07-25 13:48:01.495506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 similar READ / ABORTED - SQ DELETION notice pairs, cid:1-62 nsid:1 lba:16512-24320 ...]
00:19:04.642 [2024-07-25 13:48:01.503554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.642 [2024-07-25 13:48:01.503602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.642 [2024-07-25 13:48:01.503619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27a9290 is same with the state(5) to be set
00:19:04.642 [2024-07-25 13:48:01.505951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:19:04.642 [2024-07-25 13:48:01.505991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:19:04.642 [2024-07-25 13:48:01.506010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:19:04.642 [2024-07-25 13:48:01.506026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:19:04.642 [2024-07-25 13:48:01.506521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:04.642 [2024-07-25 13:48:01.506559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2721b50 with addr=10.0.0.2, port=4420
00:19:04.642 [2024-07-25 13:48:01.506578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2721b50 is same with the state(5) to be set
00:19:04.642 [2024-07-25 13:48:01.506666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:04.642 [2024-07-25 13:48:01.506693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x272df00 with addr=10.0.0.2, port=4420
00:19:04.642 [2024-07-25 13:48:01.506709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x272df00 is same with the state(5) to be set
00:19:04.642 [2024-07-25 13:48:01.506821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:04.642 [2024-07-25 13:48:01.506846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x272d4a0 with addr=10.0.0.2, port=4420
00:19:04.642 [2024-07-25 13:48:01.506862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x272d4a0 is same with the state(5) to be set
00:19:04.642 [2024-07-25 13:48:01.506939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:04.642 [2024-07-25 13:48:01.506964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27dfba0 with addr=10.0.0.2, port=4420
00:19:04.642 [2024-07-25 13:48:01.506980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27dfba0 is same with the state(5) to be set
00:19:04.642 [2024-07-25 13:48:01.507855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.642 [2024-07-25 13:48:01.507889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 35 similar READ / ABORTED - SQ DELETION notice pairs, cid:1-35 nsid:1 lba:16512-20864 ...]
00:19:04.643 [2024-07-25 13:48:01.508965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.643 [2024-07-25 13:48:01.508979] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.643 [2024-07-25 13:48:01.508994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.643 [2024-07-25 13:48:01.509007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.643 [2024-07-25 13:48:01.509023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.643 [2024-07-25 13:48:01.509040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.643 [2024-07-25 13:48:01.509057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.643 [2024-07-25 13:48:01.509080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.643 [2024-07-25 13:48:01.509106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.643 [2024-07-25 13:48:01.509121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.643 [2024-07-25 13:48:01.509136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.643 [2024-07-25 13:48:01.509150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.643 [2024-07-25 13:48:01.509165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.509179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.509195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.509209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.509224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.509239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.509254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.509268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.509284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.509297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.509313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.509326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.509343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.509356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.509372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.509386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.509402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.509416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.509436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.509451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.509466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.509480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.509497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.509511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.509526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.509540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.509556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.509570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.509586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.509600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.509616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.509630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.509646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.509660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.509676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.509690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.509705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.509719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.509735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.509749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.509765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.509778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.509794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.509812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.509827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27b1ef0 is same with the state(5) to be set 00:19:04.644 [2024-07-25 13:48:01.511107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.511132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.511153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.511169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.511185] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.511199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.511215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.511229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.511245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.511258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.644 [2024-07-25 13:48:01.511274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.644 [2024-07-25 13:48:01.511288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.511318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.511348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.511377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.511407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.511437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.511471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.511501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.511531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.511561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.511591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.511621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.511652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.511681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.511711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.511741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.511771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.511800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.511830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.511865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.511894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.511923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.511954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.511983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.511999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.512013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.512029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.512043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.512071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.512089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.512105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.512119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.512135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.512150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.512166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.512181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.512196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.512210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.512226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.512244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.512260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.645 [2024-07-25 13:48:01.512274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.645 [2024-07-25 13:48:01.512290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.512304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.512320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.512335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.512350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.512364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.512380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.512394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.512410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:04.646 [2024-07-25 13:48:01.512424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.512440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.512454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.512470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.512484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.512500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.512513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.512529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.512543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.512559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.512573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.512589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.512602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.512621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.512636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.512652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.512667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.512682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.512696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.512712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 
13:48:01.512727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.512743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.512757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.512773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.512787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.512804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.512818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.512834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.512847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.512863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.512878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.512893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.512908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.512924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.512938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.512953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.512968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.512984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.513001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.513017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.513031] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.513047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.513068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.513084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2878dd0 is same with the state(5) to be set 00:19:04.646 [2024-07-25 13:48:01.514347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.514370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.514391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.514412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.514429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.514443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.514459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.514473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.514489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.514503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.514519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.646 [2024-07-25 13:48:01.514533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.646 [2024-07-25 13:48:01.514549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.514563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.514578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.514592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.514609] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.514624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.514640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.514659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.514676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.514690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.514706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.514720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.514736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.514750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.514766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.514779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.514795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.514809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.514824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.514838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.514853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.514867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.514883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.514897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.514913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.514927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.514943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.514957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.514973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.514987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.515002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.515016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.515035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.515051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.515077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.515092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.515108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.515122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.515138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.515152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.515168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.515182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.515198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.515211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.515227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.515242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.515258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.515272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.515287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.515301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.515317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.515331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.515357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.515371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.515387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.515401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.515418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.515435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.515451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.515466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.515482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.515495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.515511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.647 [2024-07-25 13:48:01.515524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.647 [2024-07-25 13:48:01.515540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.647 [2024-07-25 13:48:01.515553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.515569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.515582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.515598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.515611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.515626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.515640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.515656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.515669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.515685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.515699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.515714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.515728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.515744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.515757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.515773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.515786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.515802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.515823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.515839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.515853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.515869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.515883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.515898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.515912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.515927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.515941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.515957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.515970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.515986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.515999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.516014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.516028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.516043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.516072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.516089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.516103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.516118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.516131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.516147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.516161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.516176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.516190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.516209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.516223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.516240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.516255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.516271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.516285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.516300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:04.648 [2024-07-25 13:48:01.516314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:04.648 [2024-07-25 13:48:01.516329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27b2cd0 is same with the state(5) to be set
00:19:04.648 [2024-07-25 13:48:01.518257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:19:04.648 [2024-07-25 13:48:01.518291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:19:04.648 [2024-07-25 13:48:01.518310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:04.648 [2024-07-25 13:48:01.518328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:19:04.648 [2024-07-25 13:48:01.518350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:19:04.648 task offset: 24832 on job bdev=Nvme3n1 fails
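Every READ above is completed with "ABORTED - SQ DELETION (00/08)": status code type 0x0 (generic) and status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion, i.e. the I/O submission queue was torn down by the controller reset while these reads were still in flight. A minimal bash sketch for decoding the (SCT/SC) pair that spdk_nvme_print_completion prints; the helper is hypothetical (not part of the SPDK tree) and only the generic-status mappings shown are taken from the spec:

    # Usage: decode_nvme_status 00/08
    decode_nvme_status() {
        local sct=$((16#${1%/*})) sc=$((16#${1#*/}))   # split "SCT/SC" and parse as hex
        if [ "$sct" -ne 0 ]; then
            echo "status code type $sct, status code $sc (non-generic)"
            return
        fi
        case "$sc" in
            0) echo "SUCCESS" ;;
            7) echo "ABORTED - BY REQUEST (0x07)" ;;
            8) echo "ABORTED - SQ DELETION (0x08): command was outstanding when its submission queue was deleted" ;;
            *) echo "generic status code $sc" ;;
        esac
    }
    decode_nvme_status 00/08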
00:19:04.648
00:19:04.648 Latency(us)
00:19:04.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:04.648 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:04.648 Job: Nvme1n1 ended in about 0.93 seconds with error
00:19:04.648 Verification LBA range: start 0x0 length 0x400
00:19:04.648 Nvme1n1 : 0.93 171.49 10.72 68.59 0.00 263594.94 11942.12 267192.70
00:19:04.648 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:04.648 Job: Nvme2n1 ended in about 0.95 seconds with error
00:19:04.648 Verification LBA range: start 0x0 length 0x400
00:19:04.648 Nvme2n1 : 0.95 202.66 12.67 67.55 0.00 229594.26 17961.72 256318.58
00:19:04.648 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:04.648 Job: Nvme3n1 ended in about 0.93 seconds with error
00:19:04.648 Verification LBA range: start 0x0 length 0x400
00:19:04.648 Nvme3n1 : 0.93 207.34 12.96 69.11 0.00 219659.61 4708.88 257872.02
00:19:04.648 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:04.648 Job: Nvme4n1 ended in about 0.95 seconds with error
00:19:04.648 Verification LBA range: start 0x0 length 0x400
00:19:04.648 Nvme4n1 : 0.95 201.97 12.62 67.32 0.00 221126.54 18155.90 264085.81
00:19:04.649 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:04.649 Job: Nvme5n1 ended in about 0.95 seconds with error
00:19:04.649 Verification LBA range: start 0x0 length 0x400
00:19:04.649 Nvme5n1 : 0.95 138.38 8.65 67.09 0.00 283895.96 18544.26 260978.92
00:19:04.649 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:04.649 Job: Nvme6n1 ended in about 0.97 seconds with error
00:19:04.649 Verification LBA range: start 0x0 length 0x400
00:19:04.649 Nvme6n1 : 0.97 132.02 8.25 66.01 0.00 288944.48 21359.88 259425.47
00:19:04.649 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:04.649 Job: Nvme7n1 ended in about 0.97 seconds with error
00:19:04.649 Verification LBA range: start 0x0 length 0x400
00:19:04.649 Nvme7n1 : 0.97 131.58 8.22 65.79 0.00 284157.22 18350.08 262532.36
00:19:04.649 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:04.649 Job: Nvme8n1 ended in about 0.98 seconds with error
00:19:04.649 Verification LBA range: start 0x0 length 0x400
00:19:04.649 Nvme8n1 : 0.98 196.72 12.30 65.57 0.00 209399.85 19709.35 259425.47
00:19:04.649 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:04.649 Job: Nvme9n1 ended in about 0.94 seconds with error
00:19:04.649 Verification LBA range: start 0x0 length 0x400
00:19:04.649 Nvme9n1 : 0.94 136.16 8.51 68.08 0.00 261419.61 23398.78 271853.04
00:19:04.649 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:04.649 Job: Nvme10n1 ended in about 0.96 seconds with error
00:19:04.649 Verification LBA range: start 0x0 length 0x400
00:19:04.649 Nvme10n1 : 0.96 132.87 8.30 66.43 0.00 263132.60 22330.79 285834.05
00:19:04.649 ===================================================================================================================
00:19:04.649 Total : 1651.19 103.20 671.57 0.00 248943.12 4708.88 285834.05
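The summary table above is internally consistent: with 65536-byte I/Os, the MiB/s column is just IOPS times 64 KiB. A quick arithmetic check with bc (a sanity check on the Nvme1n1 row, not part of the test suite):

    # MiB/s = IOPS * IO size / 2^20, for the Nvme1n1 row above
    echo "scale=4; 171.49 * 65536 / 1048576" | bc    # 10.7181 -> reported as 10.72 MiB/s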
00:19:04.649 [2024-07-25 13:48:01.544481] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:04.649 [2024-07-25 13:48:01.544647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2721b50 (9): Bad file descriptor
00:19:04.649 [2024-07-25 13:48:01.544682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x272df00 (9): Bad file descriptor
00:19:04.649 [2024-07-25 13:48:01.544701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x272d4a0 (9): Bad file descriptor
00:19:04.649 [2024-07-25 13:48:01.544718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27dfba0 (9): Bad file descriptor
00:19:04.649 [2024-07-25 13:48:01.544780] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:04.649 [2024-07-25 13:48:01.544806] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:04.649 [2024-07-25 13:48:01.544826] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:04.649 [2024-07-25 13:48:01.544843] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:04.649 [2024-07-25 13:48:01.544861] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:04.649 [2024-07-25 13:48:01.545001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:19:04.649 [2024-07-25 13:48:01.545302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:04.649 [2024-07-25 13:48:01.545338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2727f00 with addr=10.0.0.2, port=4420
00:19:04.649 [2024-07-25 13:48:01.545366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2727f00 is same with the state(5) to be set
00:19:04.649 [2024-07-25 13:48:01.545456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:04.649 [2024-07-25 13:48:01.545482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28abe10 with addr=10.0.0.2, port=4420
00:19:04.649 [2024-07-25 13:48:01.545499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28abe10 is same with the state(5) to be set
00:19:04.649 [2024-07-25 13:48:01.545586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:04.649 [2024-07-25 13:48:01.545612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26fd830 with addr=10.0.0.2, port=4420
00:19:04.649 [2024-07-25 13:48:01.545637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26fd830 is same with the state(5) to be set
00:19:04.649 [2024-07-25 13:48:01.545723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:04.649 [2024-07-25 13:48:01.545749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27e2660 with addr=10.0.0.2, port=4420
00:19:04.649 [2024-07-25 13:48:01.545765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27e2660 is same with the state(5) to be set
00:19:04.649 [2024-07-25 13:48:01.545876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:04.649 [2024-07-25 13:48:01.545902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ff610 with addr=10.0.0.2, port=4420
00:19:04.649 [2024-07-25 13:48:01.545918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ff610 is same with the state(5) to be set
00:19:04.649 [2024-07-25 13:48:01.545934] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:19:04.649 [2024-07-25 13:48:01.545947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:19:04.649 [2024-07-25 13:48:01.545964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:19:04.649 [2024-07-25 13:48:01.545985] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:19:04.649 [2024-07-25 13:48:01.545999] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:19:04.649 [2024-07-25 13:48:01.546013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:19:04.649 [2024-07-25 13:48:01.546030] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:19:04.649 [2024-07-25 13:48:01.546044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:19:04.649 [2024-07-25 13:48:01.546078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:19:04.649 [2024-07-25 13:48:01.546096] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:19:04.649 [2024-07-25 13:48:01.546110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:19:04.649 [2024-07-25 13:48:01.546122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:19:04.649 [2024-07-25 13:48:01.546157] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:04.649 [2024-07-25 13:48:01.546182] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:04.649 [2024-07-25 13:48:01.546201] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:04.649 [2024-07-25 13:48:01.546221] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:04.649 [2024-07-25 13:48:01.547146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:04.649 [2024-07-25 13:48:01.547170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:04.649 [2024-07-25 13:48:01.547183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:04.649 [2024-07-25 13:48:01.547194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
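At this point every queue pair to the ten subsystems is gone: reconnect polling gives up (spdk_nvme_ctrlr_reconnect_poll_async), nvme_ctrlr_fail marks each controller failed, and the queued failovers are rejected because a reset is already in flight. When reproducing such a failure by hand, controller state can be inspected out of band with SPDK's standard bdev_nvme_get_controllers RPC; a sketch (socket path as used by this test's bdevperf instance, and the jq projection is illustrative since the exact output layout varies across SPDK releases):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq '.[].name'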
00:19:04.649 [2024-07-25 13:48:01.547328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:04.650 [2024-07-25 13:48:01.547355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28bed70 with addr=10.0.0.2, port=4420
00:19:04.650 [2024-07-25 13:48:01.547371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28bed70 is same with the state(5) to be set
00:19:04.650 [2024-07-25 13:48:01.547395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2727f00 (9): Bad file descriptor
00:19:04.650 [2024-07-25 13:48:01.547416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28abe10 (9): Bad file descriptor
00:19:04.650 [2024-07-25 13:48:01.547442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26fd830 (9): Bad file descriptor
00:19:04.650 [2024-07-25 13:48:01.547459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27e2660 (9): Bad file descriptor
00:19:04.650 [2024-07-25 13:48:01.547476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ff610 (9): Bad file descriptor
00:19:04.650 [2024-07-25 13:48:01.547549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28bed70 (9): Bad file descriptor
00:19:04.650 [2024-07-25 13:48:01.547571] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:19:04.650 [2024-07-25 13:48:01.547585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:19:04.650 [2024-07-25 13:48:01.547598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:19:04.650 [2024-07-25 13:48:01.547615] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:19:04.650 [2024-07-25 13:48:01.547629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:19:04.650 [2024-07-25 13:48:01.547642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:19:04.650 [2024-07-25 13:48:01.547657] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:04.650 [2024-07-25 13:48:01.547671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:04.650 [2024-07-25 13:48:01.547684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:04.650 [2024-07-25 13:48:01.547701] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:19:04.650 [2024-07-25 13:48:01.547713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:19:04.650 [2024-07-25 13:48:01.547726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:19:04.650 [2024-07-25 13:48:01.547742] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:19:04.650 [2024-07-25 13:48:01.547755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:19:04.650 [2024-07-25 13:48:01.547767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:19:04.650 [2024-07-25 13:48:01.547827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:04.650 [2024-07-25 13:48:01.547846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:04.650 [2024-07-25 13:48:01.547858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:04.650 [2024-07-25 13:48:01.547869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:04.650 [2024-07-25 13:48:01.547881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:04.650 [2024-07-25 13:48:01.547892] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:19:04.650 [2024-07-25 13:48:01.547905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:19:04.650 [2024-07-25 13:48:01.547918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:19:04.650 [2024-07-25 13:48:01.547955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:05.248 13:48:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:19:05.248 13:48:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:19:06.188 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 604645
00:19:06.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (604645) - No such process
00:19:06.188 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:19:06.188 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:19:06.188 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:19:06.188 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:19:06.188 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:19:06.188 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:19:06.188 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:06.188 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:19:06.188 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:06.188 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:19:06.188 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:06.188 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:06.188 rmmod nvme_tcp
00:19:06.188 rmmod nvme_fabrics
00:19:06.189 rmmod nvme_keyring
00:19:06.189 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:06.189 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:19:06.189 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:19:06.189 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:19:06.189 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:19:06.189 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:19:06.189 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:19:06.189 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:19:06.189 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:19:06.189 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:06.189 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:06.189 13:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:08.729 13:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:19:08.729
00:19:08.729 real 0m7.482s
00:19:08.729 user 0m18.200s
00:19:08.729 sys 0m1.460s
00:19:08.729 13:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:08.729 13:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:19:08.729 ************************************
00:19:08.729 END TEST nvmf_shutdown_tc3
00:19:08.729 ************************************
00:19:08.729 13:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:19:08.729
00:19:08.729 real 0m27.474s
00:19:08.729 user 1m16.460s
00:19:08.729 sys 0m6.340s
00:19:08.729 13:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:08.729 13:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:19:08.729 ************************************
00:19:08.729 END TEST nvmf_shutdown ************************************
00:19:08.729 13:48:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT
00:19:08.729
00:19:08.729 real 10m20.756s
00:19:08.729 user 24m40.117s
00:19:08.729 sys 2m31.680s
00:19:08.729 13:48:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:08.729 13:48:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
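The stoptarget/nvmftestfini trace above reduces to a small teardown: remove the per-test state files, sync, unload the kernel initiator modules (the rmmod lines show nvme_tcp dragging in nvme_fabrics and nvme_keyring), then drop the target network namespace and flush the initiator interface. A condensed, hedged sketch of the same flow (paths and interface names taken from the log; ip netns delete is an assumption for what _remove_spdk_ns does, not a verified equivalent):

    rm -f ./local-job0-0-verify.state
    rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
    rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
    sync
    set +e                     # module unload may fail transiently while references drain
    modprobe -v -r nvme-tcp    # unloads nvme_tcp, nvme_fabrics, nvme_keyring per the rmmod output
    modprobe -v -r nvme-fabrics
    set -e
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1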
00:19:08.729 ************************************
00:19:08.729 END TEST nvmf_target_extra
00:19:08.729 ************************************
00:19:08.729 13:48:05 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:19:08.729 13:48:05 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:19:08.729 13:48:05 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:08.729 13:48:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:19:08.729 ************************************
00:19:08.729 START TEST nvmf_host
00:19:08.729 ************************************
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:19:08.729 * Looking for test storage...
00:19:08.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@")
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]]
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:19:08.729 ************************************
00:19:08.729 START TEST nvmf_multicontroller ************************************
00:19:08.729 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:19:08.730 * Looking for test storage...
00:19:08.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable
00:19:08.730 13:48:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=()
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=()
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=()
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=()
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=()
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=()
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=()
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:19:10.636 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:19:10.636 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]]
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:19:10.636 Found net devices under 0000:0a:00.0: cvl_0_0
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]]
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:10.636 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:19:10.637 Found net devices under 0000:0a:00.1: cvl_0_1
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes
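gather_supported_nvmf_pci_devs walks a whitelist of Intel and Mellanox device IDs (here 0x8086:0x159b, the Intel E810 "ice" parts) and resolves each matching PCI function to its net device through sysfs. A hedged way to reproduce that discovery by hand with lspci (standard flags; the expected names are what the trace found on this node):

    # Find E810 functions (vendor 0x8086, device 0x159b) and their net interfaces
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        echo "Found $pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
    done
    # Expected here: 0000:0a:00.0 -> cvl_0_0 and 0000:0a:00.1 -> cvl_0_1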
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:19:10.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:10.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms
00:19:10.637
00:19:10.637 --- 10.0.0.2 ping statistics ---
00:19:10.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:10.637 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:10.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:10.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms
00:19:10.637
00:19:10.637 --- 10.0.0.1 ping statistics ---
00:19:10.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:10.637 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=607553
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 607553
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 607553 ']'
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:10.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:10.637 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:19:10.637 [2024-07-25 13:48:07.670372] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
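nvmfappstart launches the target inside the namespace just created, and waitforlisten blocks until the RPC socket answers. A hedged equivalent of that launch-and-wait step (paths as in the log; polling spdk_get_version, a standard SPDK RPC, is my substitute for waitforlisten's internal check, not the test suite's exact mechanism):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the RPC socket until the target is up
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done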
00:19:10.637 [2024-07-25 13:48:07.670473] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.897 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.897 [2024-07-25 13:48:07.737756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:10.897 [2024-07-25 13:48:07.847792] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.897 [2024-07-25 13:48:07.847858] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:10.897 [2024-07-25 13:48:07.847881] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:10.897 [2024-07-25 13:48:07.847893] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:10.897 [2024-07-25 13:48:07.847903] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:10.897 [2024-07-25 13:48:07.848015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:10.897 [2024-07-25 13:48:07.848091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:10.897 [2024-07-25 13:48:07.848095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.157 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:11.157 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:19:11.157 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:11.157 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:11.157 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:11.157 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.157 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:11.157 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.157 13:48:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:11.157 [2024-07-25 13:48:07.995231] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:11.157 Malloc0 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.157 
13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:11.157 [2024-07-25 13:48:08.059117] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:11.157 [2024-07-25 13:48:08.066952] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:11.157 Malloc1 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.157 13:48:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=607703 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 607703 /var/tmp/bdevperf.sock 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 607703 ']' 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:11.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
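Everything up to this point is target-side provisioning. Condensed out of the xtrace, the RPC sequence is the following sketch (paths relative to an SPDK checkout are an assumption here; rpc_cmd in the harness effectively forwards these to scripts/rpc.py against the target's default RPC socket):

    # TCP transport with the harness's usual options (-o, plus -u 8192 in-capsule data size)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # one 64 MB malloc bdev with 512-byte blocks per subsystem
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # two listeners per subsystem, so each NQN is reachable over two network paths
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2 backed by Malloc1 repeats the same pattern on the same two ports

bdevperf is then started idle (-z) on its own socket, /var/tmp/bdevperf.sock, so the attach attempts that follow can be issued against it one by one before any I/O runs.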
00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:11.157 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:11.724 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:11.724 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:19:11.724 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:11.724 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.724 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:11.724 NVMe0n1 00:19:11.724 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.724 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:11.724 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.725 1 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:11.725 request: 00:19:11.725 { 00:19:11.725 "name": "NVMe0", 00:19:11.725 "trtype": "tcp", 00:19:11.725 "traddr": "10.0.0.2", 00:19:11.725 "adrfam": "ipv4", 00:19:11.725 
"trsvcid": "4420", 00:19:11.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.725 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:11.725 "hostaddr": "10.0.0.2", 00:19:11.725 "hostsvcid": "60000", 00:19:11.725 "prchk_reftag": false, 00:19:11.725 "prchk_guard": false, 00:19:11.725 "hdgst": false, 00:19:11.725 "ddgst": false, 00:19:11.725 "method": "bdev_nvme_attach_controller", 00:19:11.725 "req_id": 1 00:19:11.725 } 00:19:11.725 Got JSON-RPC error response 00:19:11.725 response: 00:19:11.725 { 00:19:11.725 "code": -114, 00:19:11.725 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:11.725 } 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:11.725 request: 00:19:11.725 { 00:19:11.725 "name": "NVMe0", 00:19:11.725 "trtype": "tcp", 00:19:11.725 "traddr": "10.0.0.2", 00:19:11.725 "adrfam": "ipv4", 00:19:11.725 "trsvcid": "4420", 00:19:11.725 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:11.725 "hostaddr": "10.0.0.2", 00:19:11.725 "hostsvcid": "60000", 00:19:11.725 "prchk_reftag": false, 00:19:11.725 "prchk_guard": false, 00:19:11.725 "hdgst": false, 00:19:11.725 "ddgst": false, 00:19:11.725 "method": "bdev_nvme_attach_controller", 00:19:11.725 "req_id": 1 00:19:11.725 } 00:19:11.725 Got JSON-RPC error response 00:19:11.725 response: 00:19:11.725 { 00:19:11.725 "code": -114, 00:19:11.725 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:19:11.725 } 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.725 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:11.726 request: 00:19:11.726 { 00:19:11.726 "name": "NVMe0", 00:19:11.726 "trtype": "tcp", 00:19:11.726 "traddr": "10.0.0.2", 00:19:11.726 "adrfam": "ipv4", 00:19:11.726 "trsvcid": "4420", 00:19:11.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.726 "hostaddr": "10.0.0.2", 00:19:11.726 "hostsvcid": "60000", 00:19:11.726 "prchk_reftag": false, 00:19:11.726 "prchk_guard": false, 00:19:11.726 "hdgst": false, 00:19:11.726 "ddgst": false, 00:19:11.726 "multipath": "disable", 00:19:11.726 "method": "bdev_nvme_attach_controller", 00:19:11.726 "req_id": 1 00:19:11.726 } 00:19:11.726 Got JSON-RPC error response 00:19:11.726 response: 00:19:11.726 { 00:19:11.726 "code": -114, 00:19:11.726 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:19:11.726 } 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:11.726 request: 00:19:11.726 { 00:19:11.726 "name": "NVMe0", 00:19:11.726 "trtype": "tcp", 00:19:11.726 "traddr": "10.0.0.2", 00:19:11.726 "adrfam": "ipv4", 00:19:11.726 "trsvcid": "4420", 00:19:11.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.726 "hostaddr": "10.0.0.2", 00:19:11.726 "hostsvcid": "60000", 00:19:11.726 "prchk_reftag": false, 00:19:11.726 "prchk_guard": false, 00:19:11.726 "hdgst": false, 00:19:11.726 "ddgst": false, 00:19:11.726 "multipath": "failover", 00:19:11.726 "method": "bdev_nvme_attach_controller", 00:19:11.726 "req_id": 1 00:19:11.726 } 00:19:11.726 Got JSON-RPC error response 00:19:11.726 response: 00:19:11.726 { 00:19:11.726 "code": -114, 00:19:11.726 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:11.726 } 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:11.726 00:19:11.726 13:48:08 
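Taken together, the rejected attaches above show the duplicate-name rules of bdev_nvme_attach_controller as this test exercises them: reusing the name NVMe0 for the network path that is already attached fails with -114 whether the host identity changes (-q hostnqn), the subsystem NQN changes (cnode2), or -x disable / -x failover is added, because none of these supplies a genuinely new path; the plain attach to the second listener at @79 succeeds because it does. A minimal sketch of the distinction, using the same socket and addresses as the trace:

    # first path: creates controller NVMe0 and bdev NVMe0n1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # same path with a different hostnqn                 -> -114
    # same name against a different subsystem NQN        -> -114
    # -x disable: rejected, "multipath is disabled"      -> -114
    # -x failover against the already-attached 4420 path -> -114
    # a new path (port 4421) to the same NQN is accepted as path #2:
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1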
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.726 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:11.985 00:19:11.985 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.985 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:11.985 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.985 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:11.985 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:11.985 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.985 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:11.985 13:48:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:12.920 0 00:19:12.920 13:48:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:19:12.920 13:48:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.920 13:48:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:12.920 13:48:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.920 13:48:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 607703 00:19:12.920 13:48:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 607703 ']' 00:19:12.920 13:48:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 607703 00:19:12.920 13:48:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:19:12.920 13:48:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:12.920 13:48:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 607703 00:19:13.178 13:48:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:13.178 
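The measurement itself is driven from outside the bdevperf process: it was launched suspended back at @43 and only starts I/O when told to over RPC, which is what the perform_tests call above does. Reconstructed from the trace (binary and script paths shortened to be relative to the SPDK checkout):

    # -z: start idle and wait for an RPC trigger; -q 128 -o 4096 -w write -t 1:
    # queue depth 128, 4 KiB I/O size, write workload, 1-second run
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
    # ...attach NVMe0 (two paths) and NVMe1 over that socket, then fire the workload:
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests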
13:48:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:13.178 13:48:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 607703' 00:19:13.178 killing process with pid 607703 00:19:13.178 13:48:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 607703 00:19:13.178 13:48:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 607703 00:19:13.438 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:13.438 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.438 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:13.438 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.438 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:13.438 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.438 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:13.438 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.438 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:19:13.438 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:13.438 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:19:13.438 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:19:13.438 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:19:13.438 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:19:13.438 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:19:13.438 [2024-07-25 13:48:08.172899] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:13.438 [2024-07-25 13:48:08.172981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid607703 ] 00:19:13.438 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.438 [2024-07-25 13:48:08.234668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.438 [2024-07-25 13:48:08.343831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.438 [2024-07-25 13:48:08.768713] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name c11ddf4d-ba7a-4ff3-a3c0-cb3756320c61 already exists 00:19:13.438 [2024-07-25 13:48:08.768754] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:c11ddf4d-ba7a-4ff3-a3c0-cb3756320c61 alias for bdev NVMe1n1 00:19:13.438 [2024-07-25 13:48:08.768785] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:19:13.438 Running I/O for 1 seconds... 
00:19:13.438 00:19:13.438 Latency(us) 00:19:13.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.438 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:13.439 NVMe0n1 : 1.00 18652.11 72.86 0.00 0.00 6844.40 3519.53 12281.93 00:19:13.439 =================================================================================================================== 00:19:13.439 Total : 18652.11 72.86 0.00 0.00 6844.40 3519.53 12281.93 00:19:13.439 Received shutdown signal, test time was about 1.000000 seconds 00:19:13.439 00:19:13.439 Latency(us) 00:19:13.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.439 =================================================================================================================== 00:19:13.439 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:13.439 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:13.439 rmmod nvme_tcp 00:19:13.439 rmmod nvme_fabrics 00:19:13.439 rmmod nvme_keyring 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 607553 ']' 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 607553 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 607553 ']' 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 607553 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 607553 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 
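The numbers in the results table are internally consistent, which is a quick sanity check worth doing on any bdevperf run:

    18652.11 IOPS x 4096 B         = 72.86 MiB/s   (matches the MiB/s column)
    128 (queue depth) / 6844.40 us ~ 18702 IOPS    (Little's law; within 0.3% of the measured 18652.11)

The second, all-zero table under "Received shutdown signal" is bdevperf's shutdown summary, not a second measurement, so the zeros there are expected.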
'killing process with pid 607553' 00:19:13.439 killing process with pid 607553 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 607553 00:19:13.439 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 607553 00:19:13.699 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:13.699 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:13.699 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:13.699 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:13.699 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:13.699 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.699 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:13.699 13:48:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.235 13:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:16.235 00:19:16.235 real 0m7.357s 00:19:16.235 user 0m11.229s 00:19:16.235 sys 0m2.289s 00:19:16.235 13:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:16.235 13:48:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:16.235 ************************************ 00:19:16.235 END TEST nvmf_multicontroller 00:19:16.235 ************************************ 00:19:16.235 13:48:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:16.235 13:48:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:16.235 13:48:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:16.235 13:48:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.235 ************************************ 00:19:16.235 START TEST nvmf_aer 00:19:16.235 ************************************ 00:19:16.235 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:16.235 * Looking for test storage... 
00:19:16.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:16.235 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:16.235 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:19:16.235 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:16.235 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:19:16.236 13:48:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:18.147 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:18.147 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:18.147 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:18.148 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.148 13:48:14 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:18.148 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:18.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:19:18.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:19:18.148 00:19:18.148 --- 10.0.0.2 ping statistics --- 00:19:18.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.148 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:18.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:18.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:19:18.148 00:19:18.148 --- 10.0.0.1 ping statistics --- 00:19:18.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.148 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:18.148 13:48:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:18.148 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:18.148 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:18.148 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:18.148 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:18.148 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=610059 00:19:18.148 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:18.148 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 610059 00:19:18.148 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 610059 ']' 00:19:18.148 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.148 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:18.148 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.148 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:18.148 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:18.148 [2024-07-25 13:48:15.052554] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
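Because this is a phy run, the harness builds the test topology out of the two physical E810 ports it just discovered instead of virtual interfaces: the target port is moved into a private network namespace, the initiator port stays in the root namespace, and both directions are pinged before any NVMe traffic flows. The sequence above, condensed:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target: 0.247 ms
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator: 0.088 ms

This is also why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD at @270: the nvmf_tgt that starts next runs under ip netns exec cvl_0_0_ns_spdk.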
00:19:18.148 [2024-07-25 13:48:15.052638] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.148 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.148 [2024-07-25 13:48:15.113811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:18.409 [2024-07-25 13:48:15.216184] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.409 [2024-07-25 13:48:15.216239] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.409 [2024-07-25 13:48:15.216268] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.409 [2024-07-25 13:48:15.216279] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.409 [2024-07-25 13:48:15.216288] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:18.409 [2024-07-25 13:48:15.216341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.409 [2024-07-25 13:48:15.216401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.409 [2024-07-25 13:48:15.216465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:18.409 [2024-07-25 13:48:15.216468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:18.409 [2024-07-25 13:48:15.372292] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:18.409 Malloc0 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:18.409 13:48:15 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:18.409 [2024-07-25 13:48:15.425823] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:18.409 [ 00:19:18.409 { 00:19:18.409 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:18.409 "subtype": "Discovery", 00:19:18.409 "listen_addresses": [], 00:19:18.409 "allow_any_host": true, 00:19:18.409 "hosts": [] 00:19:18.409 }, 00:19:18.409 { 00:19:18.409 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:18.409 "subtype": "NVMe", 00:19:18.409 "listen_addresses": [ 00:19:18.409 { 00:19:18.409 "trtype": "TCP", 00:19:18.409 "adrfam": "IPv4", 00:19:18.409 "traddr": "10.0.0.2", 00:19:18.409 "trsvcid": "4420" 00:19:18.409 } 00:19:18.409 ], 00:19:18.409 "allow_any_host": true, 00:19:18.409 "hosts": [], 00:19:18.409 "serial_number": "SPDK00000000000001", 00:19:18.409 "model_number": "SPDK bdev Controller", 00:19:18.409 "max_namespaces": 2, 00:19:18.409 "min_cntlid": 1, 00:19:18.409 "max_cntlid": 65519, 00:19:18.409 "namespaces": [ 00:19:18.409 { 00:19:18.409 "nsid": 1, 00:19:18.409 "bdev_name": "Malloc0", 00:19:18.409 "name": "Malloc0", 00:19:18.409 "nguid": "88C88891D8EB4C37B3F03399906B6482", 00:19:18.409 "uuid": "88c88891-d8eb-4c37-b3f0-3399906b6482" 00:19:18.409 } 00:19:18.409 ] 00:19:18.409 } 00:19:18.409 ] 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:18.409 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:18.669 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=610199 00:19:18.669 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:18.669 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:18.669 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:19:18.669 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:18.669 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:19:18.669 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:19:18.669 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:18.669 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.669 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:18.669 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:19:18.669 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:19:18.669 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:18.669 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:18.669 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:18.669 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:19:18.669 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:18.669 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.669 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:18.669 Malloc1 00:19:18.669 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.669 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:18.669 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.670 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:18.928 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.928 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:18.928 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.928 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:18.928 [ 00:19:18.928 { 00:19:18.928 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:18.928 "subtype": "Discovery", 00:19:18.928 "listen_addresses": [], 00:19:18.928 "allow_any_host": true, 00:19:18.928 "hosts": [] 00:19:18.928 }, 00:19:18.928 { 00:19:18.928 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:18.928 "subtype": "NVMe", 00:19:18.928 "listen_addresses": [ 00:19:18.928 { 00:19:18.928 "trtype": "TCP", 00:19:18.928 "adrfam": "IPv4", 00:19:18.928 "traddr": "10.0.0.2", 00:19:18.928 "trsvcid": "4420" 00:19:18.928 } 00:19:18.928 ], 00:19:18.928 "allow_any_host": true, 00:19:18.928 "hosts": [], 00:19:18.928 "serial_number": "SPDK00000000000001", 00:19:18.928 "model_number": "SPDK bdev Controller", 00:19:18.928 "max_namespaces": 2, 00:19:18.928 "min_cntlid": 1, 00:19:18.928 "max_cntlid": 65519, 00:19:18.928 "namespaces": [ 00:19:18.928 { 00:19:18.928 "nsid": 1, 00:19:18.928 "bdev_name": "Malloc0", 00:19:18.928 "name": "Malloc0", 00:19:18.928 "nguid": "88C88891D8EB4C37B3F03399906B6482", 00:19:18.928 "uuid": "88c88891-d8eb-4c37-b3f0-3399906b6482" 00:19:18.928 }, 00:19:18.928 { 00:19:18.928 "nsid": 2, 00:19:18.928 "bdev_name": "Malloc1", 00:19:18.928 "name": "Malloc1", 00:19:18.928 "nguid": 
"7819CF4DFE08466F9C90D4D2C6130587", 00:19:18.928 "uuid": "7819cf4d-fe08-466f-9c90-d4d2c6130587" 00:19:18.928 } 00:19:18.928 ] 00:19:18.928 } 00:19:18.928 ] 00:19:18.928 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.928 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 610199 00:19:18.928 Asynchronous Event Request test 00:19:18.928 Attaching to 10.0.0.2 00:19:18.928 Attached to 10.0.0.2 00:19:18.928 Registering asynchronous event callbacks... 00:19:18.928 Starting namespace attribute notice tests for all controllers... 00:19:18.928 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:18.928 aer_cb - Changed Namespace 00:19:18.928 Cleaning up... 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:18.929 rmmod nvme_tcp 00:19:18.929 rmmod nvme_fabrics 00:19:18.929 rmmod nvme_keyring 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 610059 ']' 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 610059 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 610059 ']' 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 610059 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@955 -- # uname 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 610059 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 610059' 00:19:18.929 killing process with pid 610059 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 610059 00:19:18.929 13:48:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 610059 00:19:19.187 13:48:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:19.187 13:48:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:19.187 13:48:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:19.187 13:48:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:19.187 13:48:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:19.187 13:48:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.187 13:48:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:19.187 13:48:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:21.724 00:19:21.724 real 0m5.405s 00:19:21.724 user 0m4.285s 00:19:21.724 sys 0m1.904s 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:21.724 ************************************ 00:19:21.724 END TEST nvmf_aer 00:19:21.724 ************************************ 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.724 ************************************ 00:19:21.724 START TEST nvmf_async_init 00:19:21.724 ************************************ 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:21.724 * Looking for test storage... 
00:19:21.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.724 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:19:21.725 13:48:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=3b749aec33eb4ef991e9af759351701c 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:19:21.725 13:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:23.674 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:23.674 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:23.674 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:23.674 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:23.674 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:23.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:23.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:19:23.675 00:19:23.675 --- 10.0.0.2 ping statistics --- 00:19:23.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.675 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:23.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:23.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:19:23.675 00:19:23.675 --- 10.0.0.1 ping statistics --- 00:19:23.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.675 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=612142 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 612142 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 612142 ']' 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:23.675 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:23.675 [2024-07-25 13:48:20.592713] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
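The two successful pings above close out the same per-test network bring-up that nvmf/common.sh replays before every host test. Condensed from the commands in this trace (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses come straight from the log; error handling and the NVMF_TARGET_NS_CMD bookkeeping are omitted), the sequence is roughly:

    # Condensed replay of the nvmf_tcp_init steps traced above (run as root):
    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
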
00:19:23.675 [2024-07-25 13:48:20.592812] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.675 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.675 [2024-07-25 13:48:20.658103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.933 [2024-07-25 13:48:20.767752] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.933 [2024-07-25 13:48:20.767812] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.933 [2024-07-25 13:48:20.767840] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.933 [2024-07-25 13:48:20.767851] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.933 [2024-07-25 13:48:20.767861] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:23.933 [2024-07-25 13:48:20.767895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.933 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:23.934 [2024-07-25 13:48:20.909501] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:23.934 null0 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:23.934 13:48:20 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3b749aec33eb4ef991e9af759351701c 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:23.934 [2024-07-25 13:48:20.949763] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.934 13:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:24.194 nvme0n1 00:19:24.194 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.194 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:24.194 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.194 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:24.194 [ 00:19:24.194 { 00:19:24.194 "name": "nvme0n1", 00:19:24.194 "aliases": [ 00:19:24.194 "3b749aec-33eb-4ef9-91e9-af759351701c" 00:19:24.194 ], 00:19:24.194 "product_name": "NVMe disk", 00:19:24.194 "block_size": 512, 00:19:24.194 "num_blocks": 2097152, 00:19:24.194 "uuid": "3b749aec-33eb-4ef9-91e9-af759351701c", 00:19:24.194 "assigned_rate_limits": { 00:19:24.194 "rw_ios_per_sec": 0, 00:19:24.194 "rw_mbytes_per_sec": 0, 00:19:24.194 "r_mbytes_per_sec": 0, 00:19:24.194 "w_mbytes_per_sec": 0 00:19:24.194 }, 00:19:24.194 "claimed": false, 00:19:24.194 "zoned": false, 00:19:24.194 "supported_io_types": { 00:19:24.194 "read": true, 00:19:24.194 "write": true, 00:19:24.194 "unmap": false, 00:19:24.194 "flush": true, 00:19:24.194 "reset": true, 00:19:24.194 "nvme_admin": true, 00:19:24.194 "nvme_io": true, 00:19:24.194 "nvme_io_md": false, 00:19:24.194 "write_zeroes": true, 00:19:24.194 "zcopy": false, 00:19:24.194 "get_zone_info": false, 00:19:24.194 "zone_management": false, 00:19:24.194 "zone_append": false, 00:19:24.194 "compare": true, 00:19:24.194 "compare_and_write": true, 00:19:24.194 "abort": true, 00:19:24.194 "seek_hole": false, 00:19:24.194 "seek_data": false, 00:19:24.194 "copy": true, 00:19:24.194 "nvme_iov_md": 
false 00:19:24.194 }, 00:19:24.194 "memory_domains": [ 00:19:24.194 { 00:19:24.194 "dma_device_id": "system", 00:19:24.194 "dma_device_type": 1 00:19:24.194 } 00:19:24.194 ], 00:19:24.194 "driver_specific": { 00:19:24.194 "nvme": [ 00:19:24.194 { 00:19:24.194 "trid": { 00:19:24.194 "trtype": "TCP", 00:19:24.194 "adrfam": "IPv4", 00:19:24.194 "traddr": "10.0.0.2", 00:19:24.194 "trsvcid": "4420", 00:19:24.194 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:24.194 }, 00:19:24.194 "ctrlr_data": { 00:19:24.194 "cntlid": 1, 00:19:24.194 "vendor_id": "0x8086", 00:19:24.194 "model_number": "SPDK bdev Controller", 00:19:24.194 "serial_number": "00000000000000000000", 00:19:24.194 "firmware_revision": "24.09", 00:19:24.194 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:24.194 "oacs": { 00:19:24.194 "security": 0, 00:19:24.194 "format": 0, 00:19:24.194 "firmware": 0, 00:19:24.194 "ns_manage": 0 00:19:24.194 }, 00:19:24.194 "multi_ctrlr": true, 00:19:24.194 "ana_reporting": false 00:19:24.194 }, 00:19:24.194 "vs": { 00:19:24.194 "nvme_version": "1.3" 00:19:24.194 }, 00:19:24.194 "ns_data": { 00:19:24.194 "id": 1, 00:19:24.194 "can_share": true 00:19:24.194 } 00:19:24.194 } 00:19:24.194 ], 00:19:24.194 "mp_policy": "active_passive" 00:19:24.194 } 00:19:24.194 } 00:19:24.194 ] 00:19:24.194 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.194 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:24.194 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.194 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:24.194 [2024-07-25 13:48:21.199157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:24.194 [2024-07-25 13:48:21.199258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204b1d0 (9): Bad file descriptor 00:19:24.455 [2024-07-25 13:48:21.331215] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
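The "Resetting controller successful" notice above completes one attach/reset cycle of the async_init test. rpc_cmd in this trace is a thin wrapper around scripts/rpc.py against the target's RPC socket, so, assuming the default socket, the same cycle can be driven directly with (arguments copied from the trace):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Attach a host-side bdev controller to the in-namespace TCP target.
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0
    # Force a controller reset; the driver disconnects and reconnects the admin queue.
    $RPC bdev_nvme_reset_controller nvme0
    # Verify nvme0n1 survived the reset; cntlid goes 1 -> 2 between the dumps
    # before and after this point in the log.
    $RPC bdev_get_bdevs -b nvme0n1
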
00:19:24.455 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.455 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:24.455 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.455 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:24.455 [ 00:19:24.455 { 00:19:24.455 "name": "nvme0n1", 00:19:24.455 "aliases": [ 00:19:24.455 "3b749aec-33eb-4ef9-91e9-af759351701c" 00:19:24.455 ], 00:19:24.455 "product_name": "NVMe disk", 00:19:24.455 "block_size": 512, 00:19:24.455 "num_blocks": 2097152, 00:19:24.455 "uuid": "3b749aec-33eb-4ef9-91e9-af759351701c", 00:19:24.455 "assigned_rate_limits": { 00:19:24.455 "rw_ios_per_sec": 0, 00:19:24.455 "rw_mbytes_per_sec": 0, 00:19:24.455 "r_mbytes_per_sec": 0, 00:19:24.455 "w_mbytes_per_sec": 0 00:19:24.455 }, 00:19:24.455 "claimed": false, 00:19:24.455 "zoned": false, 00:19:24.455 "supported_io_types": { 00:19:24.455 "read": true, 00:19:24.455 "write": true, 00:19:24.455 "unmap": false, 00:19:24.455 "flush": true, 00:19:24.455 "reset": true, 00:19:24.455 "nvme_admin": true, 00:19:24.455 "nvme_io": true, 00:19:24.455 "nvme_io_md": false, 00:19:24.455 "write_zeroes": true, 00:19:24.455 "zcopy": false, 00:19:24.455 "get_zone_info": false, 00:19:24.455 "zone_management": false, 00:19:24.455 "zone_append": false, 00:19:24.456 "compare": true, 00:19:24.456 "compare_and_write": true, 00:19:24.456 "abort": true, 00:19:24.456 "seek_hole": false, 00:19:24.456 "seek_data": false, 00:19:24.456 "copy": true, 00:19:24.456 "nvme_iov_md": false 00:19:24.456 }, 00:19:24.456 "memory_domains": [ 00:19:24.456 { 00:19:24.456 "dma_device_id": "system", 00:19:24.456 "dma_device_type": 1 00:19:24.456 } 00:19:24.456 ], 00:19:24.456 "driver_specific": { 00:19:24.456 "nvme": [ 00:19:24.456 { 00:19:24.456 "trid": { 00:19:24.456 "trtype": "TCP", 00:19:24.456 "adrfam": "IPv4", 00:19:24.456 "traddr": "10.0.0.2", 00:19:24.456 "trsvcid": "4420", 00:19:24.456 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:24.456 }, 00:19:24.456 "ctrlr_data": { 00:19:24.456 "cntlid": 2, 00:19:24.456 "vendor_id": "0x8086", 00:19:24.456 "model_number": "SPDK bdev Controller", 00:19:24.456 "serial_number": "00000000000000000000", 00:19:24.456 "firmware_revision": "24.09", 00:19:24.456 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:24.456 "oacs": { 00:19:24.456 "security": 0, 00:19:24.456 "format": 0, 00:19:24.456 "firmware": 0, 00:19:24.456 "ns_manage": 0 00:19:24.456 }, 00:19:24.456 "multi_ctrlr": true, 00:19:24.456 "ana_reporting": false 00:19:24.456 }, 00:19:24.456 "vs": { 00:19:24.456 "nvme_version": "1.3" 00:19:24.456 }, 00:19:24.456 "ns_data": { 00:19:24.456 "id": 1, 00:19:24.456 "can_share": true 00:19:24.456 } 00:19:24.456 } 00:19:24.456 ], 00:19:24.456 "mp_policy": "active_passive" 00:19:24.456 } 00:19:24.456 } 00:19:24.456 ] 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.456 13:48:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ffgSUU8GmJ 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ffgSUU8GmJ 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:24.456 [2024-07-25 13:48:21.375725] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:24.456 [2024-07-25 13:48:21.375892] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ffgSUU8GmJ 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:24.456 [2024-07-25 13:48:21.383740] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ffgSUU8GmJ 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:24.456 [2024-07-25 13:48:21.391764] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:24.456 [2024-07-25 13:48:21.391838] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:24.456 nvme0n1 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:24.456 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:24.456 [ 00:19:24.456 { 00:19:24.456 "name": "nvme0n1", 00:19:24.456 "aliases": [ 00:19:24.456 "3b749aec-33eb-4ef9-91e9-af759351701c" 00:19:24.456 ], 00:19:24.456 "product_name": "NVMe disk", 00:19:24.456 "block_size": 512, 00:19:24.456 "num_blocks": 2097152, 00:19:24.456 "uuid": "3b749aec-33eb-4ef9-91e9-af759351701c", 00:19:24.456 "assigned_rate_limits": { 00:19:24.456 "rw_ios_per_sec": 0, 00:19:24.456 "rw_mbytes_per_sec": 0, 00:19:24.456 "r_mbytes_per_sec": 0, 00:19:24.456 "w_mbytes_per_sec": 0 00:19:24.456 }, 00:19:24.456 "claimed": false, 00:19:24.456 "zoned": false, 00:19:24.456 "supported_io_types": { 00:19:24.456 "read": true, 00:19:24.456 "write": true, 00:19:24.456 "unmap": false, 00:19:24.456 "flush": true, 00:19:24.456 "reset": true, 00:19:24.456 "nvme_admin": true, 00:19:24.456 "nvme_io": true, 00:19:24.456 "nvme_io_md": false, 00:19:24.456 "write_zeroes": true, 00:19:24.456 "zcopy": false, 00:19:24.456 "get_zone_info": false, 00:19:24.456 "zone_management": false, 00:19:24.456 "zone_append": false, 00:19:24.456 "compare": true, 00:19:24.456 "compare_and_write": true, 00:19:24.456 "abort": true, 00:19:24.456 "seek_hole": false, 00:19:24.456 "seek_data": false, 00:19:24.456 "copy": true, 00:19:24.456 "nvme_iov_md": false 00:19:24.456 }, 00:19:24.456 "memory_domains": [ 00:19:24.456 { 00:19:24.456 "dma_device_id": "system", 00:19:24.456 "dma_device_type": 1 00:19:24.456 } 00:19:24.456 ], 00:19:24.456 "driver_specific": { 00:19:24.456 "nvme": [ 00:19:24.457 { 00:19:24.457 "trid": { 00:19:24.457 "trtype": "TCP", 00:19:24.457 "adrfam": "IPv4", 00:19:24.457 "traddr": "10.0.0.2", 00:19:24.457 "trsvcid": "4421", 00:19:24.457 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:24.457 }, 00:19:24.457 "ctrlr_data": { 00:19:24.457 "cntlid": 3, 00:19:24.457 "vendor_id": "0x8086", 00:19:24.457 "model_number": "SPDK bdev Controller", 00:19:24.457 "serial_number": "00000000000000000000", 00:19:24.457 "firmware_revision": "24.09", 00:19:24.457 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:24.457 "oacs": { 00:19:24.457 "security": 0, 00:19:24.457 "format": 0, 00:19:24.457 "firmware": 0, 00:19:24.457 "ns_manage": 0 00:19:24.457 }, 00:19:24.457 "multi_ctrlr": true, 00:19:24.457 "ana_reporting": false 00:19:24.457 }, 00:19:24.457 "vs": { 00:19:24.457 "nvme_version": "1.3" 00:19:24.457 }, 00:19:24.457 "ns_data": { 00:19:24.457 "id": 1, 00:19:24.457 "can_share": true 00:19:24.457 } 00:19:24.457 } 00:19:24.457 ], 00:19:24.457 "mp_policy": "active_passive" 00:19:24.457 } 00:19:24.457 } 00:19:24.457 ] 00:19:24.457 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.457 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.457 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.457 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:24.717 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.717 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.ffgSUU8GmJ 00:19:24.717 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:19:24.717 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:19:24.717 13:48:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:24.717 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:19:24.717 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:24.717 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:19:24.717 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:24.717 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:24.717 rmmod nvme_tcp 00:19:24.717 rmmod nvme_fabrics 00:19:24.717 rmmod nvme_keyring 00:19:24.717 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:24.717 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:19:24.717 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:19:24.717 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 612142 ']' 00:19:24.717 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 612142 00:19:24.717 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 612142 ']' 00:19:24.717 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 612142 00:19:24.717 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:19:24.717 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:24.717 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 612142 00:19:24.717 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:24.717 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:24.717 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 612142' 00:19:24.717 killing process with pid 612142 00:19:24.717 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 612142 00:19:24.718 [2024-07-25 13:48:21.583451] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:24.718 [2024-07-25 13:48:21.583485] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:24.718 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 612142 00:19:24.979 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:24.979 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:24.979 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:24.979 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:24.979 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:24.979 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.979 13:48:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:24.979 13:48:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.885 13:48:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:26.885 00:19:26.885 real 0m5.641s 00:19:26.885 user 0m2.119s 00:19:26.885 sys 0m1.914s 00:19:26.885 13:48:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:26.885 13:48:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:26.885 ************************************ 00:19:26.885 END TEST nvmf_async_init 00:19:26.885 ************************************ 00:19:26.885 13:48:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:26.885 13:48:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:26.885 13:48:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:26.885 13:48:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.145 ************************************ 00:19:27.145 START TEST dma 00:19:27.145 ************************************ 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:27.145 * Looking for test storage... 00:19:27.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:27.145 
13:48:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:27.145 13:48:23 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:27.145 13:48:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:27.145 13:48:24 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:19:27.145 13:48:24 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:19:27.145 00:19:27.145 real 0m0.079s 00:19:27.145 user 0m0.027s 00:19:27.145 sys 0m0.058s 00:19:27.145 13:48:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:27.145 13:48:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:19:27.145 ************************************ 00:19:27.145 END TEST dma 00:19:27.145 ************************************ 00:19:27.145 13:48:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:27.145 13:48:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:27.145 13:48:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:27.145 13:48:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.145 ************************************ 00:19:27.145 START TEST nvmf_identify 00:19:27.145 ************************************ 00:19:27.145 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:27.145 * Looking for test storage... 00:19:27.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:27.145 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:27.145 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:19:27.145 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:19:27.146 13:48:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:29.050 13:48:25 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:29.050 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:29.050 13:48:25 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:29.050 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:29.050 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:29.050 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:29.050 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:29.051 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:29.051 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:29.051 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:29.051 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:29.051 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.051 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:29.051 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:29.051 13:48:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:29.051 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:29.051 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:29.051 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:29.051 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:29.051 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:29.309 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:29.309 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:29.309 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:29.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:29.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:19:29.309 00:19:29.309 --- 10.0.0.2 ping statistics --- 00:19:29.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.309 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:19:29.309 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:29.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:29.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:19:29.309 00:19:29.309 --- 10.0.0.1 ping statistics --- 00:19:29.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.309 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:19:29.309 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:29.309 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:19:29.309 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:29.309 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:29.309 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:29.309 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:29.309 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:29.309 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:29.309 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:29.309 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:29.309 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:29.309 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:29.309 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=614265 00:19:29.309 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:29.310 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:29.310 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 614265 00:19:29.310 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 614265 ']' 00:19:29.310 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.310 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:29.310 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.310 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:29.310 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:29.310 [2024-07-25 13:48:26.196439] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
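The trace above is nvmftestinit building the TCP test bed: it moves one port of the e810 pair (cvl_0_0) into a fresh network namespace, leaves its peer (cvl_0_1) on the host, assigns 10.0.0.2/24 and 10.0.0.1/24 respectively, opens TCP port 4420, verifies both directions with ping (the two checks at nvmf/common.sh@267 and @268), and only then launches nvmf_tgt inside the namespace. A minimal sketch of the same bring-up, assuming a veth pair in place of the physical cvl_0_* ports; the interface and namespace names below are placeholders, not the autotest helpers, and the nvmf_tgt path assumes the script runs from an SPDK checkout:

  # Sketch: recreate the target/initiator split with a veth pair.
  ip netns add tgt_ns                               # log: cvl_0_0_ns_spdk
  ip link add veth_init type veth peer name veth_tgt
  ip link set veth_tgt netns tgt_ns                 # log: ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev veth_init             # initiator-side address
  ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
  ip link set veth_init up
  ip netns exec tgt_ns ip link set veth_tgt up
  ip netns exec tgt_ns ip link set lo up
  iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # initiator -> target
  ip netns exec tgt_ns ping -c 1 10.0.0.1           # target -> initiator
  ip netns exec tgt_ns ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Keeping the target end of the link in its own namespace is what lets a single machine act as both NVMe-oF host and target over real (or veth) interfaces, which is why every target-side command in the trace is prefixed with ip netns exec cvl_0_0_ns_spdk.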
00:19:29.310 [2024-07-25 13:48:26.196508] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.310 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.310 [2024-07-25 13:48:26.258668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:29.569 [2024-07-25 13:48:26.369171] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.569 [2024-07-25 13:48:26.369219] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.570 [2024-07-25 13:48:26.369244] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.570 [2024-07-25 13:48:26.369255] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.570 [2024-07-25 13:48:26.369266] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:29.570 [2024-07-25 13:48:26.369328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.570 [2024-07-25 13:48:26.369405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.570 [2024-07-25 13:48:26.369493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:29.570 [2024-07-25 13:48:26.369495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:29.570 [2024-07-25 13:48:26.501562] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:29.570 Malloc0 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:29.570 [2024-07-25 13:48:26.583381] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.570 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:29.570 [ 00:19:29.570 { 00:19:29.570 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:29.570 "subtype": "Discovery", 00:19:29.570 "listen_addresses": [ 00:19:29.570 { 00:19:29.570 "trtype": "TCP", 00:19:29.570 "adrfam": "IPv4", 00:19:29.570 "traddr": "10.0.0.2", 00:19:29.570 "trsvcid": "4420" 00:19:29.570 } 00:19:29.570 ], 00:19:29.570 "allow_any_host": true, 00:19:29.570 "hosts": [] 00:19:29.570 }, 00:19:29.570 { 00:19:29.570 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.570 "subtype": "NVMe", 00:19:29.570 "listen_addresses": [ 00:19:29.570 { 00:19:29.570 "trtype": "TCP", 00:19:29.570 "adrfam": "IPv4", 00:19:29.831 "traddr": "10.0.0.2", 00:19:29.831 "trsvcid": "4420" 00:19:29.831 } 00:19:29.831 ], 00:19:29.831 "allow_any_host": true, 00:19:29.831 "hosts": [], 00:19:29.831 "serial_number": "SPDK00000000000001", 00:19:29.831 "model_number": "SPDK bdev Controller", 00:19:29.831 "max_namespaces": 32, 00:19:29.831 "min_cntlid": 1, 00:19:29.831 "max_cntlid": 65519, 00:19:29.831 "namespaces": [ 00:19:29.831 { 00:19:29.831 "nsid": 1, 00:19:29.831 "bdev_name": "Malloc0", 00:19:29.831 "name": "Malloc0", 00:19:29.831 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:29.831 "eui64": "ABCDEF0123456789", 00:19:29.831 "uuid": "898f2e4a-1812-4df3-8d1b-39407e3e88bf" 00:19:29.831 } 00:19:29.831 ] 00:19:29.831 } 00:19:29.831 ] 00:19:29.831 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.831 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:29.831 [2024-07-25 13:48:26.626218] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:29.831 [2024-07-25 13:48:26.626263] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid614294 ] 00:19:29.831 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.831 [2024-07-25 13:48:26.660868] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:19:29.831 [2024-07-25 13:48:26.660939] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:29.831 [2024-07-25 13:48:26.660951] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:29.831 [2024-07-25 13:48:26.660971] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:29.831 [2024-07-25 13:48:26.660986] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:29.831 [2024-07-25 13:48:26.665117] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:19:29.831 [2024-07-25 13:48:26.665169] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x7b1540 0 00:19:29.831 [2024-07-25 13:48:26.673074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:29.831 [2024-07-25 13:48:26.673102] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:29.831 [2024-07-25 13:48:26.673113] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:29.831 [2024-07-25 13:48:26.673120] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:29.831 [2024-07-25 13:48:26.673182] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.831 [2024-07-25 13:48:26.673195] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.831 [2024-07-25 13:48:26.673204] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b1540) 00:19:29.831 [2024-07-25 13:48:26.673227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:29.831 [2024-07-25 13:48:26.673254] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8113c0, cid 0, qid 0 00:19:29.832 [2024-07-25 13:48:26.681074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:29.832 [2024-07-25 13:48:26.681092] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:29.832 [2024-07-25 13:48:26.681099] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.681108] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8113c0) on tqpair=0x7b1540 00:19:29.832 [2024-07-25 13:48:26.681124] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:29.832 [2024-07-25 13:48:26.681136] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:19:29.832 [2024-07-25 13:48:26.681147] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting 
state to read vs wait for vs (no timeout) 00:19:29.832 [2024-07-25 13:48:26.681172] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.681182] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.681188] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b1540) 00:19:29.832 [2024-07-25 13:48:26.681200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.832 [2024-07-25 13:48:26.681224] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8113c0, cid 0, qid 0 00:19:29.832 [2024-07-25 13:48:26.681351] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:29.832 [2024-07-25 13:48:26.681364] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:29.832 [2024-07-25 13:48:26.681371] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.681378] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8113c0) on tqpair=0x7b1540 00:19:29.832 [2024-07-25 13:48:26.681392] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:19:29.832 [2024-07-25 13:48:26.681406] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:19:29.832 [2024-07-25 13:48:26.681418] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.681426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.681432] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b1540) 00:19:29.832 [2024-07-25 13:48:26.681443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.832 [2024-07-25 13:48:26.681468] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8113c0, cid 0, qid 0 00:19:29.832 [2024-07-25 13:48:26.681548] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:29.832 [2024-07-25 13:48:26.681560] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:29.832 [2024-07-25 13:48:26.681567] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.681574] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8113c0) on tqpair=0x7b1540 00:19:29.832 [2024-07-25 13:48:26.681583] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:19:29.832 [2024-07-25 13:48:26.681598] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:19:29.832 [2024-07-25 13:48:26.681610] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.681618] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.681624] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b1540) 00:19:29.832 [2024-07-25 13:48:26.681635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.832 [2024-07-25 13:48:26.681655] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8113c0, cid 0, qid 0 00:19:29.832 [2024-07-25 13:48:26.681737] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:29.832 [2024-07-25 13:48:26.681751] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:29.832 [2024-07-25 13:48:26.681758] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.681764] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8113c0) on tqpair=0x7b1540 00:19:29.832 [2024-07-25 13:48:26.681773] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:29.832 [2024-07-25 13:48:26.681790] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.681800] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.681806] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b1540) 00:19:29.832 [2024-07-25 13:48:26.681817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.832 [2024-07-25 13:48:26.681838] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8113c0, cid 0, qid 0 00:19:29.832 [2024-07-25 13:48:26.681913] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:29.832 [2024-07-25 13:48:26.681925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:29.832 [2024-07-25 13:48:26.681932] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.681939] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8113c0) on tqpair=0x7b1540 00:19:29.832 [2024-07-25 13:48:26.681949] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:19:29.832 [2024-07-25 13:48:26.681957] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:19:29.832 [2024-07-25 13:48:26.681970] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:29.832 [2024-07-25 13:48:26.682082] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:19:29.832 [2024-07-25 13:48:26.682092] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:29.832 [2024-07-25 13:48:26.682108] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.682116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.682127] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b1540) 00:19:29.832 [2024-07-25 13:48:26.682138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.832 [2024-07-25 13:48:26.682160] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8113c0, cid 0, qid 0 00:19:29.832 [2024-07-25 13:48:26.682284] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:29.832 
[2024-07-25 13:48:26.682297] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:29.832 [2024-07-25 13:48:26.682304] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.682311] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8113c0) on tqpair=0x7b1540 00:19:29.832 [2024-07-25 13:48:26.682319] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:29.832 [2024-07-25 13:48:26.682335] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.682344] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.682351] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b1540) 00:19:29.832 [2024-07-25 13:48:26.682361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.832 [2024-07-25 13:48:26.682382] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8113c0, cid 0, qid 0 00:19:29.832 [2024-07-25 13:48:26.682465] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:29.832 [2024-07-25 13:48:26.682478] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:29.832 [2024-07-25 13:48:26.682485] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.682492] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8113c0) on tqpair=0x7b1540 00:19:29.832 [2024-07-25 13:48:26.682500] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:29.832 [2024-07-25 13:48:26.682508] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:19:29.832 [2024-07-25 13:48:26.682522] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:19:29.832 [2024-07-25 13:48:26.682536] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:19:29.832 [2024-07-25 13:48:26.682555] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.682564] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b1540) 00:19:29.832 [2024-07-25 13:48:26.682575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.832 [2024-07-25 13:48:26.682595] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8113c0, cid 0, qid 0 00:19:29.832 [2024-07-25 13:48:26.682714] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:29.832 [2024-07-25 13:48:26.682726] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:29.832 [2024-07-25 13:48:26.682734] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.682741] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7b1540): datao=0, datal=4096, cccid=0 00:19:29.832 [2024-07-25 13:48:26.682749] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x8113c0) on tqpair(0x7b1540): expected_datao=0, payload_size=4096 00:19:29.832 [2024-07-25 13:48:26.682757] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.682769] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.682778] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:29.832 [2024-07-25 13:48:26.682796] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:29.832 [2024-07-25 13:48:26.682806] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:29.833 [2024-07-25 13:48:26.682812] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.682819] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8113c0) on tqpair=0x7b1540 00:19:29.833 [2024-07-25 13:48:26.682832] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:19:29.833 [2024-07-25 13:48:26.682841] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:19:29.833 [2024-07-25 13:48:26.682849] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:19:29.833 [2024-07-25 13:48:26.682859] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:19:29.833 [2024-07-25 13:48:26.682867] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:19:29.833 [2024-07-25 13:48:26.682876] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:19:29.833 [2024-07-25 13:48:26.682891] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:19:29.833 [2024-07-25 13:48:26.682908] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.682917] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.682923] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b1540) 00:19:29.833 [2024-07-25 13:48:26.682935] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:29.833 [2024-07-25 13:48:26.682955] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8113c0, cid 0, qid 0 00:19:29.833 [2024-07-25 13:48:26.683044] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:29.833 [2024-07-25 13:48:26.683056] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:29.833 [2024-07-25 13:48:26.683071] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.683079] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8113c0) on tqpair=0x7b1540 00:19:29.833 [2024-07-25 13:48:26.683092] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.683100] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.683107] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7b1540) 00:19:29.833 [2024-07-25 13:48:26.683116] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.833 [2024-07-25 13:48:26.683127] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.683134] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.683140] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x7b1540) 00:19:29.833 [2024-07-25 13:48:26.683149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.833 [2024-07-25 13:48:26.683158] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.683165] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.683172] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x7b1540) 00:19:29.833 [2024-07-25 13:48:26.683180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.833 [2024-07-25 13:48:26.683190] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.683197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.683207] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7b1540) 00:19:29.833 [2024-07-25 13:48:26.683216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.833 [2024-07-25 13:48:26.683226] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:19:29.833 [2024-07-25 13:48:26.683245] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:29.833 [2024-07-25 13:48:26.683259] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.683266] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7b1540) 00:19:29.833 [2024-07-25 13:48:26.683276] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.833 [2024-07-25 13:48:26.683299] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8113c0, cid 0, qid 0 00:19:29.833 [2024-07-25 13:48:26.683310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x811540, cid 1, qid 0 00:19:29.833 [2024-07-25 13:48:26.683318] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8116c0, cid 2, qid 0 00:19:29.833 [2024-07-25 13:48:26.683326] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x811840, cid 3, qid 0 00:19:29.833 [2024-07-25 13:48:26.683333] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8119c0, cid 4, qid 0 00:19:29.833 [2024-07-25 13:48:26.683479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:29.833 [2024-07-25 13:48:26.683493] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:29.833 [2024-07-25 13:48:26.683500] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.683506] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8119c0) on tqpair=0x7b1540 00:19:29.833 [2024-07-25 13:48:26.683516] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:19:29.833 [2024-07-25 13:48:26.683526] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:19:29.833 [2024-07-25 13:48:26.683544] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.683554] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7b1540) 00:19:29.833 [2024-07-25 13:48:26.683564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.833 [2024-07-25 13:48:26.683585] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8119c0, cid 4, qid 0 00:19:29.833 [2024-07-25 13:48:26.683687] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:29.833 [2024-07-25 13:48:26.683701] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:29.833 [2024-07-25 13:48:26.683708] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.683714] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7b1540): datao=0, datal=4096, cccid=4 00:19:29.833 [2024-07-25 13:48:26.683722] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8119c0) on tqpair(0x7b1540): expected_datao=0, payload_size=4096 00:19:29.833 [2024-07-25 13:48:26.683729] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.683746] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.683755] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.727072] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:29.833 [2024-07-25 13:48:26.727093] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:29.833 [2024-07-25 13:48:26.727100] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.727108] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8119c0) on tqpair=0x7b1540 00:19:29.833 [2024-07-25 13:48:26.727134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:19:29.833 [2024-07-25 13:48:26.727176] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.727188] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7b1540) 00:19:29.833 [2024-07-25 13:48:26.727200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.833 [2024-07-25 13:48:26.727212] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.727219] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.727227] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7b1540) 00:19:29.833 [2024-07-25 13:48:26.727236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:19:29.833 [2024-07-25 13:48:26.727264] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8119c0, cid 4, qid 0 00:19:29.833 [2024-07-25 13:48:26.727277] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x811b40, cid 5, qid 0 00:19:29.833 [2024-07-25 13:48:26.727423] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:29.833 [2024-07-25 13:48:26.727438] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:29.833 [2024-07-25 13:48:26.727445] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.727452] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7b1540): datao=0, datal=1024, cccid=4 00:19:29.833 [2024-07-25 13:48:26.727459] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8119c0) on tqpair(0x7b1540): expected_datao=0, payload_size=1024 00:19:29.833 [2024-07-25 13:48:26.727467] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.727477] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.727484] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.727493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:29.833 [2024-07-25 13:48:26.727502] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:29.833 [2024-07-25 13:48:26.727509] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:29.833 [2024-07-25 13:48:26.727515] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x811b40) on tqpair=0x7b1540 00:19:29.833 [2024-07-25 13:48:26.768163] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:29.833 [2024-07-25 13:48:26.768183] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:29.834 [2024-07-25 13:48:26.768191] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:29.834 [2024-07-25 13:48:26.768198] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8119c0) on tqpair=0x7b1540 00:19:29.834 [2024-07-25 13:48:26.768217] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.834 [2024-07-25 13:48:26.768227] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7b1540) 00:19:29.834 [2024-07-25 13:48:26.768238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.834 [2024-07-25 13:48:26.768268] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8119c0, cid 4, qid 0 00:19:29.834 [2024-07-25 13:48:26.768366] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:29.834 [2024-07-25 13:48:26.768381] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:29.834 [2024-07-25 13:48:26.768388] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:29.834 [2024-07-25 13:48:26.768394] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7b1540): datao=0, datal=3072, cccid=4 00:19:29.834 [2024-07-25 13:48:26.768402] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8119c0) on tqpair(0x7b1540): expected_datao=0, payload_size=3072 00:19:29.834 [2024-07-25 13:48:26.768415] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.834 [2024-07-25 13:48:26.768426] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:19:29.834 [2024-07-25 13:48:26.768433] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:19:29.834 [2024-07-25 13:48:26.768445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:19:29.834 [2024-07-25 13:48:26.768455] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:19:29.834 [2024-07-25 13:48:26.768462] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:19:29.834 [2024-07-25 13:48:26.768469] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8119c0) on tqpair=0x7b1540
00:19:29.834 [2024-07-25 13:48:26.768485] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:19:29.834 [2024-07-25 13:48:26.768494] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7b1540)
00:19:29.834 [2024-07-25 13:48:26.768505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:29.834 [2024-07-25 13:48:26.768532] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8119c0, cid 4, qid 0
00:19:29.834 [2024-07-25 13:48:26.768631] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:19:29.834 [2024-07-25 13:48:26.768645] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:19:29.834 [2024-07-25 13:48:26.768651] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:19:29.834 [2024-07-25 13:48:26.768658] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7b1540): datao=0, datal=8, cccid=4
00:19:29.834 [2024-07-25 13:48:26.768666] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8119c0) on tqpair(0x7b1540): expected_datao=0, payload_size=8
00:19:29.834 [2024-07-25 13:48:26.768673] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:19:29.834 [2024-07-25 13:48:26.768683] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:19:29.834 [2024-07-25 13:48:26.768690] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:19:29.834 [2024-07-25 13:48:26.809159] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:19:29.834 [2024-07-25 13:48:26.809178] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:19:29.834 [2024-07-25 13:48:26.809186] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:19:29.834 [2024-07-25 13:48:26.809193] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8119c0) on tqpair=0x7b1540
00:19:29.834 =====================================================
00:19:29.834 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:19:29.834 =====================================================
00:19:29.834 Controller Capabilities/Features
00:19:29.834 ================================
00:19:29.834 Vendor ID: 0000
00:19:29.834 Subsystem Vendor ID: 0000
00:19:29.834 Serial Number: ....................
00:19:29.834 Model Number: ........................................
00:19:29.834 Firmware Version: 24.09
00:19:29.834 Recommended Arb Burst: 0
00:19:29.834 IEEE OUI Identifier: 00 00 00
00:19:29.834 Multi-path I/O
00:19:29.834 May have multiple subsystem ports: No
00:19:29.834 May have multiple controllers: No
00:19:29.834 Associated with SR-IOV VF: No
00:19:29.834 Max Data Transfer Size: 131072
00:19:29.834 Max Number of Namespaces: 0
00:19:29.834 Max Number of I/O Queues: 1024
00:19:29.834 NVMe Specification Version (VS): 1.3
00:19:29.834 NVMe Specification Version (Identify): 1.3
00:19:29.834 Maximum Queue Entries: 128
00:19:29.834 Contiguous Queues Required: Yes
00:19:29.834 Arbitration Mechanisms Supported
00:19:29.834 Weighted Round Robin: Not Supported
00:19:29.834 Vendor Specific: Not Supported
00:19:29.834 Reset Timeout: 15000 ms
00:19:29.834 Doorbell Stride: 4 bytes
00:19:29.834 NVM Subsystem Reset: Not Supported
00:19:29.834 Command Sets Supported
00:19:29.834 NVM Command Set: Supported
00:19:29.834 Boot Partition: Not Supported
00:19:29.834 Memory Page Size Minimum: 4096 bytes
00:19:29.834 Memory Page Size Maximum: 4096 bytes
00:19:29.834 Persistent Memory Region: Not Supported
00:19:29.834 Optional Asynchronous Events Supported
00:19:29.834 Namespace Attribute Notices: Not Supported
00:19:29.834 Firmware Activation Notices: Not Supported
00:19:29.834 ANA Change Notices: Not Supported
00:19:29.834 PLE Aggregate Log Change Notices: Not Supported
00:19:29.834 LBA Status Info Alert Notices: Not Supported
00:19:29.834 EGE Aggregate Log Change Notices: Not Supported
00:19:29.834 Normal NVM Subsystem Shutdown event: Not Supported
00:19:29.834 Zone Descriptor Change Notices: Not Supported
00:19:29.834 Discovery Log Change Notices: Supported
00:19:29.834 Controller Attributes
00:19:29.834 128-bit Host Identifier: Not Supported
00:19:29.834 Non-Operational Permissive Mode: Not Supported
00:19:29.834 NVM Sets: Not Supported
00:19:29.834 Read Recovery Levels: Not Supported
00:19:29.834 Endurance Groups: Not Supported
00:19:29.834 Predictable Latency Mode: Not Supported
00:19:29.834 Traffic Based Keep Alive: Not Supported
00:19:29.834 Namespace Granularity: Not Supported
00:19:29.834 SQ Associations: Not Supported
00:19:29.834 UUID List: Not Supported
00:19:29.834 Multi-Domain Subsystem: Not Supported
00:19:29.834 Fixed Capacity Management: Not Supported
00:19:29.834 Variable Capacity Management: Not Supported
00:19:29.834 Delete Endurance Group: Not Supported
00:19:29.834 Delete NVM Set: Not Supported
00:19:29.834 Extended LBA Formats Supported: Not Supported
00:19:29.834 Flexible Data Placement Supported: Not Supported
00:19:29.834 
00:19:29.834 Controller Memory Buffer Support
00:19:29.834 ================================
00:19:29.834 Supported: No
00:19:29.834 
00:19:29.834 Persistent Memory Region Support
00:19:29.834 ================================
00:19:29.834 Supported: No
00:19:29.834 
00:19:29.834 Admin Command Set Attributes
00:19:29.834 ============================
00:19:29.834 Security Send/Receive: Not Supported
00:19:29.834 Format NVM: Not Supported
00:19:29.834 Firmware Activate/Download: Not Supported
00:19:29.834 Namespace Management: Not Supported
00:19:29.834 Device Self-Test: Not Supported
00:19:29.834 Directives: Not Supported
00:19:29.834 NVMe-MI: Not Supported
00:19:29.834 Virtualization Management: Not Supported
00:19:29.834 Doorbell Buffer Config: Not Supported
00:19:29.834 Get LBA Status Capability: Not Supported
00:19:29.834 Command & Feature Lockdown Capability: Not Supported
00:19:29.834 Abort Command Limit: 1
00:19:29.834 Async Event Request Limit: 4
00:19:29.834 Number of Firmware Slots: N/A
00:19:29.834 Firmware Slot 1 Read-Only: N/A
00:19:29.834 Firmware Activation Without Reset: N/A
00:19:29.834 Multiple Update Detection Support: N/A
00:19:29.834 Firmware Update Granularity: No Information Provided
00:19:29.834 Per-Namespace SMART Log: No
00:19:29.834 Asymmetric Namespace Access Log Page: Not Supported
00:19:29.834 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:19:29.834 Command Effects Log Page: Not Supported
00:19:29.834 Get Log Page Extended Data: Supported
00:19:29.834 Telemetry Log Pages: Not Supported
00:19:29.834 Persistent Event Log Pages: Not Supported
00:19:29.834 Supported Log Pages Log Page: May Support
00:19:29.834 Commands Supported & Effects Log Page: Not Supported
00:19:29.834 Feature Identifiers & Effects Log Page: May Support
00:19:29.834 NVMe-MI Commands & Effects Log Page: May Support
00:19:29.834 Data Area 4 for Telemetry Log: Not Supported
00:19:29.834 Error Log Page Entries Supported: 128
00:19:29.834 Keep Alive: Not Supported
00:19:29.834 
00:19:29.834 NVM Command Set Attributes
00:19:29.834 ==========================
00:19:29.834 Submission Queue Entry Size
00:19:29.834 Max: 1
00:19:29.834 Min: 1
00:19:29.834 Completion Queue Entry Size
00:19:29.834 Max: 1
00:19:29.835 Min: 1
00:19:29.835 Number of Namespaces: 0
00:19:29.835 Compare Command: Not Supported
00:19:29.835 Write Uncorrectable Command: Not Supported
00:19:29.835 Dataset Management Command: Not Supported
00:19:29.835 Write Zeroes Command: Not Supported
00:19:29.835 Set Features Save Field: Not Supported
00:19:29.835 Reservations: Not Supported
00:19:29.835 Timestamp: Not Supported
00:19:29.835 Copy: Not Supported
00:19:29.835 Volatile Write Cache: Not Present
00:19:29.835 Atomic Write Unit (Normal): 1
00:19:29.835 Atomic Write Unit (PFail): 1
00:19:29.835 Atomic Compare & Write Unit: 1
00:19:29.835 Fused Compare & Write: Supported
00:19:29.835 Scatter-Gather List
00:19:29.835 SGL Command Set: Supported
00:19:29.835 SGL Keyed: Supported
00:19:29.835 SGL Bit Bucket Descriptor: Not Supported
00:19:29.835 SGL Metadata Pointer: Not Supported
00:19:29.835 Oversized SGL: Not Supported
00:19:29.835 SGL Metadata Address: Not Supported
00:19:29.835 SGL Offset: Supported
00:19:29.835 Transport SGL Data Block: Not Supported
00:19:29.835 Replay Protected Memory Block: Not Supported
00:19:29.835 
00:19:29.835 Firmware Slot Information
00:19:29.835 =========================
00:19:29.835 Active slot: 0
00:19:29.835 
00:19:29.835 
00:19:29.835 Error Log
00:19:29.835 =========
00:19:29.835 
00:19:29.835 Active Namespaces
00:19:29.835 =================
00:19:29.835 Discovery Log Page
00:19:29.835 ==================
00:19:29.835 Generation Counter: 2
00:19:29.835 Number of Records: 2
00:19:29.835 Record Format: 0
00:19:29.835 
00:19:29.835 Discovery Log Entry 0
00:19:29.835 ----------------------
00:19:29.835 Transport Type: 3 (TCP)
00:19:29.835 Address Family: 1 (IPv4)
00:19:29.835 Subsystem Type: 3 (Current Discovery Subsystem)
00:19:29.835 Entry Flags:
00:19:29.835 Duplicate Returned Information: 1
00:19:29.835 Explicit Persistent Connection Support for Discovery: 1
00:19:29.835 Transport Requirements:
00:19:29.835 Secure Channel: Not Required
00:19:29.835 Port ID: 0 (0x0000)
00:19:29.835 Controller ID: 65535 (0xffff)
00:19:29.835 Admin Max SQ Size: 128
00:19:29.835 Transport Service Identifier: 4420
00:19:29.835 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:19:29.835 Transport Address: 10.0.0.2
00:19:29.835 Discovery Log Entry 1
00:19:29.835 ----------------------
00:19:29.835 Transport Type: 3 (TCP)
00:19:29.835 Address Family: 1 (IPv4)
00:19:29.835 Subsystem Type: 2 (NVM Subsystem)
00:19:29.835 Entry Flags:
00:19:29.835 Duplicate Returned Information: 0
00:19:29.835 Explicit Persistent Connection Support for Discovery: 0
00:19:29.835 Transport Requirements:
00:19:29.835 Secure Channel: Not Required
00:19:29.835 Port ID: 0 (0x0000)
00:19:29.835 Controller ID: 65535 (0xffff)
00:19:29.835 Admin Max SQ Size: 128
00:19:29.835 Transport Service Identifier: 4420
00:19:29.835 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:19:29.835 Transport Address: 10.0.0.2 [2024-07-25 13:48:26.809312] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:19:29.835 [2024-07-25 13:48:26.809335] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8113c0) on tqpair=0x7b1540
00:19:29.835 [2024-07-25 13:48:26.809348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:29.835 [2024-07-25 13:48:26.809358] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x811540) on tqpair=0x7b1540
00:19:29.835 [2024-07-25 13:48:26.809365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:29.835 [2024-07-25 13:48:26.809374] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8116c0) on tqpair=0x7b1540
00:19:29.835 [2024-07-25 13:48:26.809381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:29.835 [2024-07-25 13:48:26.809389] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x811840) on tqpair=0x7b1540
00:19:29.835 [2024-07-25 13:48:26.809397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:29.835 [2024-07-25 13:48:26.809420] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:19:29.835 [2024-07-25 13:48:26.809430] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:19:29.835 [2024-07-25 13:48:26.809437] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7b1540)
00:19:29.835 [2024-07-25 13:48:26.809451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:29.835 [2024-07-25 13:48:26.809477] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x811840, cid 3, qid 0
00:19:29.835 [2024-07-25 13:48:26.809581] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:19:29.835 [2024-07-25 13:48:26.809594] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:19:29.835 [2024-07-25 13:48:26.809601] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:19:29.835 [2024-07-25 13:48:26.809608] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x811840) on tqpair=0x7b1540
00:19:29.835 [2024-07-25 13:48:26.809620] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:19:29.835 [2024-07-25 13:48:26.809629] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:19:29.835 [2024-07-25 13:48:26.809635] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7b1540)
00:19:29.835 [2024-07-25 13:48:26.809646]
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-25 13:48:26.809672] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x811840, cid 3, qid 0 00:19:29.835 [2024-07-25 13:48:26.809787] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:29.835 [2024-07-25 13:48:26.809799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:29.835 [2024-07-25 13:48:26.809806] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:29.835 [2024-07-25 13:48:26.809813] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x811840) on tqpair=0x7b1540 00:19:29.835 [2024-07-25 13:48:26.809822] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:19:29.835 [2024-07-25 13:48:26.809831] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:19:29.835 [2024-07-25 13:48:26.809846] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.835 [2024-07-25 13:48:26.809855] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.835 [2024-07-25 13:48:26.809862] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7b1540) 00:19:29.835 [2024-07-25 13:48:26.809872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-25 13:48:26.809892] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x811840, cid 3, qid 0 00:19:29.835 [2024-07-25 13:48:26.809973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:29.835 [2024-07-25 13:48:26.809987] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:29.835 [2024-07-25 13:48:26.809994] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:29.835 [2024-07-25 13:48:26.810001] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x811840) on tqpair=0x7b1540 00:19:29.835 [2024-07-25 13:48:26.810018] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.835 [2024-07-25 13:48:26.810028] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.835 [2024-07-25 13:48:26.810034] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7b1540) 00:19:29.835 [2024-07-25 13:48:26.810044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-25 13:48:26.810074] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x811840, cid 3, qid 0 00:19:29.835 [2024-07-25 13:48:26.810149] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:29.835 [2024-07-25 13:48:26.810163] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:29.835 [2024-07-25 13:48:26.810170] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:29.835 [2024-07-25 13:48:26.810177] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x811840) on tqpair=0x7b1540 00:19:29.835 [2024-07-25 13:48:26.810192] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.835 [2024-07-25 13:48:26.810202] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.835 [2024-07-25 13:48:26.810213] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7b1540) 00:19:29.835 [2024-07-25 13:48:26.810225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-25 13:48:26.810246] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x811840, cid 3, qid 0 00:19:29.835 [2024-07-25 13:48:26.810324] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:29.835 [2024-07-25 13:48:26.810338] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:29.835 [2024-07-25 13:48:26.810345] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:29.835 [2024-07-25 13:48:26.810352] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x811840) on tqpair=0x7b1540 00:19:29.835 [2024-07-25 13:48:26.810368] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.835 [2024-07-25 13:48:26.810378] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.836 [2024-07-25 13:48:26.810384] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7b1540) 00:19:29.836 [2024-07-25 13:48:26.810395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.836 [2024-07-25 13:48:26.810415] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x811840, cid 3, qid 0 00:19:29.836 [2024-07-25 13:48:26.810493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:29.836 [2024-07-25 13:48:26.810507] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:29.836 [2024-07-25 13:48:26.810513] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:29.836 [2024-07-25 13:48:26.810520] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x811840) on tqpair=0x7b1540 00:19:29.836 [2024-07-25 13:48:26.810536] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.836 [2024-07-25 13:48:26.810545] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.836 [2024-07-25 13:48:26.810552] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7b1540) 00:19:29.836 [2024-07-25 13:48:26.810563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.836 [2024-07-25 13:48:26.810583] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x811840, cid 3, qid 0 00:19:29.836 [2024-07-25 13:48:26.810658] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:29.836 [2024-07-25 13:48:26.810670] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:29.836 [2024-07-25 13:48:26.810676] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:29.836 [2024-07-25 13:48:26.810683] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x811840) on tqpair=0x7b1540 00:19:29.836 [2024-07-25 13:48:26.810699] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.836 [2024-07-25 13:48:26.810708] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.836 [2024-07-25 13:48:26.810715] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7b1540) 00:19:29.836 [2024-07-25 13:48:26.810726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.836 [2024-07-25 13:48:26.810746] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x811840, cid 3, qid 0 00:19:29.836 [2024-07-25 13:48:26.810820] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:29.836 [2024-07-25 13:48:26.810832] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:29.836 [2024-07-25 13:48:26.810838] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:29.836 [2024-07-25 13:48:26.810845] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x811840) on tqpair=0x7b1540 00:19:29.836 [2024-07-25 13:48:26.810861] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.836 [2024-07-25 13:48:26.810870] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.836 [2024-07-25 13:48:26.810877] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7b1540) 00:19:29.836 [2024-07-25 13:48:26.810891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.836 [2024-07-25 13:48:26.810912] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x811840, cid 3, qid 0 00:19:29.836 [2024-07-25 13:48:26.810985] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:29.836 [2024-07-25 13:48:26.810997] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:29.836 [2024-07-25 13:48:26.811003] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:29.836 [2024-07-25 13:48:26.811010] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x811840) on tqpair=0x7b1540 00:19:29.836 [2024-07-25 13:48:26.811026] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:29.836 [2024-07-25 13:48:26.811035] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:29.836 [2024-07-25 13:48:26.811042] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7b1540) 00:19:29.836 [2024-07-25 13:48:26.811052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.836 [2024-07-25 13:48:26.815089] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x811840, cid 3, qid 0 00:19:29.836 [2024-07-25 13:48:26.815208] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:29.836 [2024-07-25 13:48:26.815221] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:29.836 [2024-07-25 13:48:26.815228] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:29.836 [2024-07-25 13:48:26.815234] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x811840) on tqpair=0x7b1540 00:19:29.836 [2024-07-25 13:48:26.815247] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:19:29.836 00:19:29.836 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:29.836 [2024-07-25 13:48:26.852765] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
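For context on the run that follows: the -r argument handed to spdk_nvme_identify above is a plain SPDK transport ID string, and the DEBUG trace below is the host driver walking its controller-init state machine after connecting over TCP. A minimal host-side sketch of the same flow, assuming a hypothetical standalone program (the app name and abbreviated error handling are illustrative, not code from this test), would be:

/* identify_sketch.c: hedged illustration of the spdk_nvme_identify
 * invocation above; not part of this test run. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr_opts opts;
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch"; /* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string as the -r argument above. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
	/* A 10 s keep-alive timeout is consistent with the driver's
	 * "Sending keep alive every 5000000 us" lines in this log
	 * (keep alives are sent at half the timeout). */
	opts.keep_alive_timeout_ms = 10000;

	ctrlr = spdk_nvme_connect(&trid, &opts, sizeof(opts));
	if (ctrlr == NULL) {
		return 1;
	}
	/* ... issue identify / log page commands here ... */
	spdk_nvme_detach(ctrlr);
	return 0;
}

spdk_nvme_connect() returns only once the init state machine reaches "ready", which is exactly the sequence of "setting state to ..." lines traced below.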
00:19:29.836 [2024-07-25 13:48:26.852818] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid614382 ] 00:19:30.098 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.098 [2024-07-25 13:48:26.890555] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:19:30.098 [2024-07-25 13:48:26.890617] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:30.098 [2024-07-25 13:48:26.890628] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:30.098 [2024-07-25 13:48:26.890645] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:30.098 [2024-07-25 13:48:26.890658] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:30.098 [2024-07-25 13:48:26.894102] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:19:30.098 [2024-07-25 13:48:26.894145] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x18ae540 0 00:19:30.098 [2024-07-25 13:48:26.901068] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:30.098 [2024-07-25 13:48:26.901113] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:30.098 [2024-07-25 13:48:26.901125] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:30.098 [2024-07-25 13:48:26.901132] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:30.098 [2024-07-25 13:48:26.901188] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.098 [2024-07-25 13:48:26.901201] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.098 [2024-07-25 13:48:26.901208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ae540) 00:19:30.098 [2024-07-25 13:48:26.901223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:30.098 [2024-07-25 13:48:26.901249] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e3c0, cid 0, qid 0 00:19:30.098 [2024-07-25 13:48:26.908072] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.098 [2024-07-25 13:48:26.908091] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.098 [2024-07-25 13:48:26.908098] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.098 [2024-07-25 13:48:26.908106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e3c0) on tqpair=0x18ae540 00:19:30.098 [2024-07-25 13:48:26.908121] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:30.098 [2024-07-25 13:48:26.908132] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:19:30.098 [2024-07-25 13:48:26.908142] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:19:30.098 [2024-07-25 13:48:26.908162] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.098 [2024-07-25 13:48:26.908171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
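The "read vs" / "read cap" states just traced are the driver fetching the controller's version and capability registers over the fabric (the FABRIC PROPERTY GET capsules in the trace). Once the controller is attached, the same registers are readable through the public API; a hedged sketch, assuming a ctrlr obtained as in the previous sketch:

/* Hedged sketch: print the registers the init state machine reads.
 * Assumes `ctrlr` came from a successful spdk_nvme_connect(). */
#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static void
print_init_registers(struct spdk_nvme_ctrlr *ctrlr)
{
	/* Mirrors "setting state to read vs" / "read cap" above. */
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	/* VS 1.3 and "Maximum Queue Entries: 128" in the identify dump
	 * earlier in this log correspond to mjr.mnr and MQES (MQES is
	 * zero-based, hence the +1). */
	printf("NVMe VS: %u.%u\n", vs.bits.mjr, vs.bits.mnr);
	printf("Max queue entries: %u\n", cap.bits.mqes + 1);
	printf("CSTS.RDY: %u\n", csts.bits.rdy);
}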
00:19:30.098 [2024-07-25 13:48:26.908178] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ae540) 00:19:30.098 [2024-07-25 13:48:26.908189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.098 [2024-07-25 13:48:26.908214] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e3c0, cid 0, qid 0 00:19:30.098 [2024-07-25 13:48:26.908343] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.098 [2024-07-25 13:48:26.908355] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.098 [2024-07-25 13:48:26.908362] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.098 [2024-07-25 13:48:26.908369] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e3c0) on tqpair=0x18ae540 00:19:30.098 [2024-07-25 13:48:26.908381] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:19:30.098 [2024-07-25 13:48:26.908396] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:19:30.098 [2024-07-25 13:48:26.908408] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.098 [2024-07-25 13:48:26.908416] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.098 [2024-07-25 13:48:26.908423] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ae540) 00:19:30.098 [2024-07-25 13:48:26.908433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.098 [2024-07-25 13:48:26.908455] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e3c0, cid 0, qid 0 00:19:30.098 [2024-07-25 13:48:26.908537] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.098 [2024-07-25 13:48:26.908551] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.098 [2024-07-25 13:48:26.908559] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.098 [2024-07-25 13:48:26.908566] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e3c0) on tqpair=0x18ae540 00:19:30.098 [2024-07-25 13:48:26.908575] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:19:30.098 [2024-07-25 13:48:26.908589] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:19:30.098 [2024-07-25 13:48:26.908602] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.098 [2024-07-25 13:48:26.908614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.098 [2024-07-25 13:48:26.908622] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ae540) 00:19:30.098 [2024-07-25 13:48:26.908632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.098 [2024-07-25 13:48:26.908654] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e3c0, cid 0, qid 0 00:19:30.098 [2024-07-25 13:48:26.908734] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.098 [2024-07-25 13:48:26.908746] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:19:30.098 [2024-07-25 13:48:26.908753] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.098 [2024-07-25 13:48:26.908760] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e3c0) on tqpair=0x18ae540 00:19:30.098 [2024-07-25 13:48:26.908768] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:30.098 [2024-07-25 13:48:26.908785] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.098 [2024-07-25 13:48:26.908794] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.098 [2024-07-25 13:48:26.908801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ae540) 00:19:30.098 [2024-07-25 13:48:26.908811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.098 [2024-07-25 13:48:26.908832] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e3c0, cid 0, qid 0 00:19:30.098 [2024-07-25 13:48:26.908926] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.098 [2024-07-25 13:48:26.908941] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.098 [2024-07-25 13:48:26.908948] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.098 [2024-07-25 13:48:26.908955] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e3c0) on tqpair=0x18ae540 00:19:30.098 [2024-07-25 13:48:26.908963] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:19:30.098 [2024-07-25 13:48:26.908972] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:19:30.098 [2024-07-25 13:48:26.908985] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:30.098 [2024-07-25 13:48:26.909096] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:19:30.098 [2024-07-25 13:48:26.909106] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:30.098 [2024-07-25 13:48:26.909121] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.098 [2024-07-25 13:48:26.909129] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.098 [2024-07-25 13:48:26.909135] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ae540) 00:19:30.098 [2024-07-25 13:48:26.909146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.098 [2024-07-25 13:48:26.909168] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e3c0, cid 0, qid 0 00:19:30.098 [2024-07-25 13:48:26.909299] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.098 [2024-07-25 13:48:26.909313] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.098 [2024-07-25 13:48:26.909320] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.098 [2024-07-25 13:48:26.909327] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e3c0) on 
tqpair=0x18ae540 00:19:30.098 [2024-07-25 13:48:26.909335] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:30.098 [2024-07-25 13:48:26.909355] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.098 [2024-07-25 13:48:26.909366] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.098 [2024-07-25 13:48:26.909372] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ae540) 00:19:30.098 [2024-07-25 13:48:26.909383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.098 [2024-07-25 13:48:26.909404] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e3c0, cid 0, qid 0 00:19:30.099 [2024-07-25 13:48:26.909500] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.099 [2024-07-25 13:48:26.909514] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.099 [2024-07-25 13:48:26.909521] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.909528] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e3c0) on tqpair=0x18ae540 00:19:30.099 [2024-07-25 13:48:26.909537] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:30.099 [2024-07-25 13:48:26.909546] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:19:30.099 [2024-07-25 13:48:26.909559] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:19:30.099 [2024-07-25 13:48:26.909574] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:19:30.099 [2024-07-25 13:48:26.909589] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.909597] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ae540) 00:19:30.099 [2024-07-25 13:48:26.909608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.099 [2024-07-25 13:48:26.909630] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e3c0, cid 0, qid 0 00:19:30.099 [2024-07-25 13:48:26.909757] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.099 [2024-07-25 13:48:26.909772] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.099 [2024-07-25 13:48:26.909779] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.909786] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18ae540): datao=0, datal=4096, cccid=0 00:19:30.099 [2024-07-25 13:48:26.909794] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x190e3c0) on tqpair(0x18ae540): expected_datao=0, payload_size=4096 00:19:30.099 [2024-07-25 13:48:26.909802] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.909821] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.909830] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.950163] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.099 [2024-07-25 13:48:26.950182] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.099 [2024-07-25 13:48:26.950190] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.950197] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e3c0) on tqpair=0x18ae540 00:19:30.099 [2024-07-25 13:48:26.950209] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:19:30.099 [2024-07-25 13:48:26.950219] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:19:30.099 [2024-07-25 13:48:26.950227] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:19:30.099 [2024-07-25 13:48:26.950235] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:19:30.099 [2024-07-25 13:48:26.950244] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:19:30.099 [2024-07-25 13:48:26.950256] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:19:30.099 [2024-07-25 13:48:26.950272] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:19:30.099 [2024-07-25 13:48:26.950290] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.950299] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.950306] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ae540) 00:19:30.099 [2024-07-25 13:48:26.950318] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:30.099 [2024-07-25 13:48:26.950342] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e3c0, cid 0, qid 0 00:19:30.099 [2024-07-25 13:48:26.950471] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.099 [2024-07-25 13:48:26.950484] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.099 [2024-07-25 13:48:26.950491] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.950498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e3c0) on tqpair=0x18ae540 00:19:30.099 [2024-07-25 13:48:26.950510] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.950518] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.950525] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18ae540) 00:19:30.099 [2024-07-25 13:48:26.950535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.099 [2024-07-25 13:48:26.950545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.950552] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.950559] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x18ae540) 00:19:30.099 [2024-07-25 13:48:26.950567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.099 [2024-07-25 13:48:26.950577] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.950584] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.950591] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x18ae540) 00:19:30.099 [2024-07-25 13:48:26.950599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.099 [2024-07-25 13:48:26.950609] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.950616] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.950622] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ae540) 00:19:30.099 [2024-07-25 13:48:26.950631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.099 [2024-07-25 13:48:26.950641] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:30.099 [2024-07-25 13:48:26.950660] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:30.099 [2024-07-25 13:48:26.950674] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.950681] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18ae540) 00:19:30.099 [2024-07-25 13:48:26.950692] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.099 [2024-07-25 13:48:26.950715] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e3c0, cid 0, qid 0 00:19:30.099 [2024-07-25 13:48:26.950730] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e540, cid 1, qid 0 00:19:30.099 [2024-07-25 13:48:26.950739] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e6c0, cid 2, qid 0 00:19:30.099 [2024-07-25 13:48:26.950747] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e840, cid 3, qid 0 00:19:30.099 [2024-07-25 13:48:26.950754] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e9c0, cid 4, qid 0 00:19:30.099 [2024-07-25 13:48:26.950892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.099 [2024-07-25 13:48:26.950906] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.099 [2024-07-25 13:48:26.950913] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.950921] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e9c0) on tqpair=0x18ae540 00:19:30.099 [2024-07-25 13:48:26.950931] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:19:30.099 [2024-07-25 13:48:26.950940] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:19:30.099 [2024-07-25 13:48:26.950959] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:19:30.099 [2024-07-25 13:48:26.950973] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:30.099 [2024-07-25 13:48:26.950984] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.950992] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.950999] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18ae540) 00:19:30.099 [2024-07-25 13:48:26.951010] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:30.099 [2024-07-25 13:48:26.951031] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e9c0, cid 4, qid 0 00:19:30.099 [2024-07-25 13:48:26.955141] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.099 [2024-07-25 13:48:26.955159] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.099 [2024-07-25 13:48:26.955166] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.955173] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e9c0) on tqpair=0x18ae540 00:19:30.099 [2024-07-25 13:48:26.955245] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:19:30.099 [2024-07-25 13:48:26.955267] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:30.099 [2024-07-25 13:48:26.955284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.099 [2024-07-25 13:48:26.955293] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18ae540) 00:19:30.099 [2024-07-25 13:48:26.955304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.099 [2024-07-25 13:48:26.955327] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e9c0, cid 4, qid 0 00:19:30.100 [2024-07-25 13:48:26.955465] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.100 [2024-07-25 13:48:26.955480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.100 [2024-07-25 13:48:26.955487] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.955494] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18ae540): datao=0, datal=4096, cccid=4 00:19:30.100 [2024-07-25 13:48:26.955502] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x190e9c0) on tqpair(0x18ae540): expected_datao=0, payload_size=4096 00:19:30.100 [2024-07-25 13:48:26.955514] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.955525] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.955533] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.955545] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:19:30.100 [2024-07-25 13:48:26.955555] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.100 [2024-07-25 13:48:26.955561] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.955568] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e9c0) on tqpair=0x18ae540 00:19:30.100 [2024-07-25 13:48:26.955587] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:19:30.100 [2024-07-25 13:48:26.955611] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:19:30.100 [2024-07-25 13:48:26.955630] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:19:30.100 [2024-07-25 13:48:26.955645] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.955653] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18ae540) 00:19:30.100 [2024-07-25 13:48:26.955664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.100 [2024-07-25 13:48:26.955686] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e9c0, cid 4, qid 0 00:19:30.100 [2024-07-25 13:48:26.955804] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.100 [2024-07-25 13:48:26.955819] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.100 [2024-07-25 13:48:26.955826] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.955832] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18ae540): datao=0, datal=4096, cccid=4 00:19:30.100 [2024-07-25 13:48:26.955840] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x190e9c0) on tqpair(0x18ae540): expected_datao=0, payload_size=4096 00:19:30.100 [2024-07-25 13:48:26.955848] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.955858] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.955866] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.955888] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.100 [2024-07-25 13:48:26.955899] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.100 [2024-07-25 13:48:26.955906] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.955913] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e9c0) on tqpair=0x18ae540 00:19:30.100 [2024-07-25 13:48:26.955938] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:30.100 [2024-07-25 13:48:26.955959] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:30.100 [2024-07-25 13:48:26.955974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.955982] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18ae540) 00:19:30.100 [2024-07-25 13:48:26.955993] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.100 [2024-07-25 13:48:26.956014] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e9c0, cid 4, qid 0 00:19:30.100 [2024-07-25 13:48:26.956123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.100 [2024-07-25 13:48:26.956139] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.100 [2024-07-25 13:48:26.956146] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.956156] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18ae540): datao=0, datal=4096, cccid=4 00:19:30.100 [2024-07-25 13:48:26.956165] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x190e9c0) on tqpair(0x18ae540): expected_datao=0, payload_size=4096 00:19:30.100 [2024-07-25 13:48:26.956172] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.956183] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.956190] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.956208] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.100 [2024-07-25 13:48:26.956219] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.100 [2024-07-25 13:48:26.956226] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.956233] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e9c0) on tqpair=0x18ae540 00:19:30.100 [2024-07-25 13:48:26.956247] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:30.100 [2024-07-25 13:48:26.956263] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:19:30.100 [2024-07-25 13:48:26.956279] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:19:30.100 [2024-07-25 13:48:26.956293] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:19:30.100 [2024-07-25 13:48:26.956303] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:30.100 [2024-07-25 13:48:26.956312] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:19:30.100 [2024-07-25 13:48:26.956322] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:19:30.100 [2024-07-25 13:48:26.956330] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:19:30.100 [2024-07-25 13:48:26.956340] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:19:30.100 [2024-07-25 13:48:26.956361] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.956370] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x18ae540) 00:19:30.100 [2024-07-25 13:48:26.956381] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.100 [2024-07-25 13:48:26.956393] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.956400] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.956407] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18ae540) 00:19:30.100 [2024-07-25 13:48:26.956416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.100 [2024-07-25 13:48:26.956442] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e9c0, cid 4, qid 0 00:19:30.100 [2024-07-25 13:48:26.956454] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190eb40, cid 5, qid 0 00:19:30.100 [2024-07-25 13:48:26.956586] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.100 [2024-07-25 13:48:26.956600] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.100 [2024-07-25 13:48:26.956607] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.956614] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e9c0) on tqpair=0x18ae540 00:19:30.100 [2024-07-25 13:48:26.956625] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.100 [2024-07-25 13:48:26.956638] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.100 [2024-07-25 13:48:26.956645] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.956652] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190eb40) on tqpair=0x18ae540 00:19:30.100 [2024-07-25 13:48:26.956668] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.956678] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18ae540) 00:19:30.100 [2024-07-25 13:48:26.956688] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.100 [2024-07-25 13:48:26.956709] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190eb40, cid 5, qid 0 00:19:30.100 [2024-07-25 13:48:26.956839] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.100 [2024-07-25 13:48:26.956853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.100 [2024-07-25 13:48:26.956860] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.956867] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190eb40) on tqpair=0x18ae540 00:19:30.100 [2024-07-25 13:48:26.956883] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.956893] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18ae540) 00:19:30.100 [2024-07-25 13:48:26.956903] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.100 [2024-07-25 13:48:26.956924] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190eb40, cid 5, qid 0 00:19:30.100 [2024-07-25 13:48:26.957000] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.100 [2024-07-25 13:48:26.957014] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.100 [2024-07-25 13:48:26.957021] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.100 [2024-07-25 13:48:26.957028] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190eb40) on tqpair=0x18ae540 00:19:30.101 [2024-07-25 13:48:26.957044] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957053] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18ae540) 00:19:30.101 [2024-07-25 13:48:26.957075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.101 [2024-07-25 13:48:26.957097] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190eb40, cid 5, qid 0 00:19:30.101 [2024-07-25 13:48:26.957192] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.101 [2024-07-25 13:48:26.957207] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.101 [2024-07-25 13:48:26.957214] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957221] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190eb40) on tqpair=0x18ae540 00:19:30.101 [2024-07-25 13:48:26.957247] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957258] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18ae540) 00:19:30.101 [2024-07-25 13:48:26.957268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.101 [2024-07-25 13:48:26.957281] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957289] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18ae540) 00:19:30.101 [2024-07-25 13:48:26.957299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.101 [2024-07-25 13:48:26.957311] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957319] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x18ae540) 00:19:30.101 [2024-07-25 13:48:26.957331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.101 [2024-07-25 13:48:26.957345] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957353] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x18ae540) 00:19:30.101 [2024-07-25 13:48:26.957363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.101 [2024-07-25 13:48:26.957385] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190eb40, cid 5, qid 0 00:19:30.101 [2024-07-25 13:48:26.957396] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e9c0, cid 4, qid 0 
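
The DEBUG trace above is the tail of the SPDK host driver's controller-initialization state machine: after Set Features / Number of Queues (cdw10:00000007) it issues IDENTIFY with CNS 0x02 (active namespace list, cdw10:00000002), CNS 0x00 (per-namespace data, nsid:1) and CNS 0x03 (namespace ID descriptors), then probes supported features and log pages (the GET FEATURES and GET LOG PAGE capsules above) on its way to the ready state. The same admin sequence can be replayed against this target from outside the harness; a minimal sketch, assuming nvme-cli and the kernel nvme-tcp initiator are available and that the controller enumerates as /dev/nvme0 (the test itself uses SPDK's userspace initiator, not the kernel one):

  # connect to the subsystem this target exposes at 10.0.0.2:4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list-ns /dev/nvme0            # IDENTIFY CNS 0x02: active namespace IDs
  nvme id-ns /dev/nvme0 -n 1         # IDENTIFY CNS 0x00: per-namespace data
  nvme ns-descs /dev/nvme0 -n 1      # IDENTIFY CNS 0x03: NGUID/EUI64/UUID descriptors
  nvme get-feature /dev/nvme0 -f 7   # Get Features FID 0x07: number of queues
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
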
00:19:30.101 [2024-07-25 13:48:26.957404] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190ecc0, cid 6, qid 0 00:19:30.101 [2024-07-25 13:48:26.957412] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190ee40, cid 7, qid 0 00:19:30.101 [2024-07-25 13:48:26.957607] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.101 [2024-07-25 13:48:26.957622] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.101 [2024-07-25 13:48:26.957629] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957636] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18ae540): datao=0, datal=8192, cccid=5 00:19:30.101 [2024-07-25 13:48:26.957643] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x190eb40) on tqpair(0x18ae540): expected_datao=0, payload_size=8192 00:19:30.101 [2024-07-25 13:48:26.957651] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957670] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957679] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957692] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.101 [2024-07-25 13:48:26.957702] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.101 [2024-07-25 13:48:26.957709] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957715] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18ae540): datao=0, datal=512, cccid=4 00:19:30.101 [2024-07-25 13:48:26.957723] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x190e9c0) on tqpair(0x18ae540): expected_datao=0, payload_size=512 00:19:30.101 [2024-07-25 13:48:26.957730] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957740] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957747] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957756] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.101 [2024-07-25 13:48:26.957764] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.101 [2024-07-25 13:48:26.957771] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957777] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18ae540): datao=0, datal=512, cccid=6 00:19:30.101 [2024-07-25 13:48:26.957785] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x190ecc0) on tqpair(0x18ae540): expected_datao=0, payload_size=512 00:19:30.101 [2024-07-25 13:48:26.957792] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957802] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957809] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957817] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:30.101 [2024-07-25 13:48:26.957826] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:30.101 [2024-07-25 13:48:26.957832] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957839] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18ae540): datao=0, datal=4096, cccid=7 00:19:30.101 [2024-07-25 13:48:26.957850] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x190ee40) on tqpair(0x18ae540): expected_datao=0, payload_size=4096 00:19:30.101 [2024-07-25 13:48:26.957858] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957868] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957875] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.101 [2024-07-25 13:48:26.957896] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.101 [2024-07-25 13:48:26.957903] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957910] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190eb40) on tqpair=0x18ae540 00:19:30.101 [2024-07-25 13:48:26.957928] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.101 [2024-07-25 13:48:26.957939] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.101 [2024-07-25 13:48:26.957946] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.957952] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e9c0) on tqpair=0x18ae540 00:19:30.101 [2024-07-25 13:48:26.957968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.101 [2024-07-25 13:48:26.957995] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.101 [2024-07-25 13:48:26.958001] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.958007] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190ecc0) on tqpair=0x18ae540 00:19:30.101 [2024-07-25 13:48:26.958018] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.101 [2024-07-25 13:48:26.958028] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.101 [2024-07-25 13:48:26.958034] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.101 [2024-07-25 13:48:26.958041] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190ee40) on tqpair=0x18ae540 00:19:30.101 ===================================================== 00:19:30.101 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:30.101 ===================================================== 00:19:30.101 Controller Capabilities/Features 00:19:30.101 ================================ 00:19:30.101 Vendor ID: 8086 00:19:30.101 Subsystem Vendor ID: 8086 00:19:30.101 Serial Number: SPDK00000000000001 00:19:30.101 Model Number: SPDK bdev Controller 00:19:30.101 Firmware Version: 24.09 00:19:30.101 Recommended Arb Burst: 6 00:19:30.101 IEEE OUI Identifier: e4 d2 5c 00:19:30.101 Multi-path I/O 00:19:30.101 May have multiple subsystem ports: Yes 00:19:30.101 May have multiple controllers: Yes 00:19:30.101 Associated with SR-IOV VF: No 00:19:30.101 Max Data Transfer Size: 131072 00:19:30.101 Max Number of Namespaces: 32 00:19:30.101 Max Number of I/O Queues: 127 00:19:30.101 NVMe Specification Version (VS): 1.3 00:19:30.101 NVMe Specification Version (Identify): 1.3 00:19:30.101 Maximum Queue Entries: 128 00:19:30.101 Contiguous Queues Required: Yes 00:19:30.101 
Arbitration Mechanisms Supported 00:19:30.101 Weighted Round Robin: Not Supported 00:19:30.101 Vendor Specific: Not Supported 00:19:30.101 Reset Timeout: 15000 ms 00:19:30.101 Doorbell Stride: 4 bytes 00:19:30.101 NVM Subsystem Reset: Not Supported 00:19:30.101 Command Sets Supported 00:19:30.101 NVM Command Set: Supported 00:19:30.101 Boot Partition: Not Supported 00:19:30.101 Memory Page Size Minimum: 4096 bytes 00:19:30.101 Memory Page Size Maximum: 4096 bytes 00:19:30.101 Persistent Memory Region: Not Supported 00:19:30.101 Optional Asynchronous Events Supported 00:19:30.101 Namespace Attribute Notices: Supported 00:19:30.101 Firmware Activation Notices: Not Supported 00:19:30.101 ANA Change Notices: Not Supported 00:19:30.101 PLE Aggregate Log Change Notices: Not Supported 00:19:30.101 LBA Status Info Alert Notices: Not Supported 00:19:30.101 EGE Aggregate Log Change Notices: Not Supported 00:19:30.102 Normal NVM Subsystem Shutdown event: Not Supported 00:19:30.102 Zone Descriptor Change Notices: Not Supported 00:19:30.102 Discovery Log Change Notices: Not Supported 00:19:30.102 Controller Attributes 00:19:30.102 128-bit Host Identifier: Supported 00:19:30.102 Non-Operational Permissive Mode: Not Supported 00:19:30.102 NVM Sets: Not Supported 00:19:30.102 Read Recovery Levels: Not Supported 00:19:30.102 Endurance Groups: Not Supported 00:19:30.102 Predictable Latency Mode: Not Supported 00:19:30.102 Traffic Based Keep ALive: Not Supported 00:19:30.102 Namespace Granularity: Not Supported 00:19:30.102 SQ Associations: Not Supported 00:19:30.102 UUID List: Not Supported 00:19:30.102 Multi-Domain Subsystem: Not Supported 00:19:30.102 Fixed Capacity Management: Not Supported 00:19:30.102 Variable Capacity Management: Not Supported 00:19:30.102 Delete Endurance Group: Not Supported 00:19:30.102 Delete NVM Set: Not Supported 00:19:30.102 Extended LBA Formats Supported: Not Supported 00:19:30.102 Flexible Data Placement Supported: Not Supported 00:19:30.102 00:19:30.102 Controller Memory Buffer Support 00:19:30.102 ================================ 00:19:30.102 Supported: No 00:19:30.102 00:19:30.102 Persistent Memory Region Support 00:19:30.102 ================================ 00:19:30.102 Supported: No 00:19:30.102 00:19:30.102 Admin Command Set Attributes 00:19:30.102 ============================ 00:19:30.102 Security Send/Receive: Not Supported 00:19:30.102 Format NVM: Not Supported 00:19:30.102 Firmware Activate/Download: Not Supported 00:19:30.102 Namespace Management: Not Supported 00:19:30.102 Device Self-Test: Not Supported 00:19:30.102 Directives: Not Supported 00:19:30.102 NVMe-MI: Not Supported 00:19:30.102 Virtualization Management: Not Supported 00:19:30.102 Doorbell Buffer Config: Not Supported 00:19:30.102 Get LBA Status Capability: Not Supported 00:19:30.102 Command & Feature Lockdown Capability: Not Supported 00:19:30.102 Abort Command Limit: 4 00:19:30.102 Async Event Request Limit: 4 00:19:30.102 Number of Firmware Slots: N/A 00:19:30.102 Firmware Slot 1 Read-Only: N/A 00:19:30.102 Firmware Activation Without Reset: N/A 00:19:30.102 Multiple Update Detection Support: N/A 00:19:30.102 Firmware Update Granularity: No Information Provided 00:19:30.102 Per-Namespace SMART Log: No 00:19:30.102 Asymmetric Namespace Access Log Page: Not Supported 00:19:30.102 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:19:30.102 Command Effects Log Page: Supported 00:19:30.102 Get Log Page Extended Data: Supported 00:19:30.102 Telemetry Log Pages: Not Supported 00:19:30.102 Persistent Event Log 
Pages: Not Supported 00:19:30.102 Supported Log Pages Log Page: May Support 00:19:30.102 Commands Supported & Effects Log Page: Not Supported 00:19:30.102 Feature Identifiers & Effects Log Page:May Support 00:19:30.102 NVMe-MI Commands & Effects Log Page: May Support 00:19:30.102 Data Area 4 for Telemetry Log: Not Supported 00:19:30.102 Error Log Page Entries Supported: 128 00:19:30.102 Keep Alive: Supported 00:19:30.102 Keep Alive Granularity: 10000 ms 00:19:30.102 00:19:30.102 NVM Command Set Attributes 00:19:30.102 ========================== 00:19:30.102 Submission Queue Entry Size 00:19:30.102 Max: 64 00:19:30.102 Min: 64 00:19:30.102 Completion Queue Entry Size 00:19:30.102 Max: 16 00:19:30.102 Min: 16 00:19:30.102 Number of Namespaces: 32 00:19:30.102 Compare Command: Supported 00:19:30.102 Write Uncorrectable Command: Not Supported 00:19:30.102 Dataset Management Command: Supported 00:19:30.102 Write Zeroes Command: Supported 00:19:30.102 Set Features Save Field: Not Supported 00:19:30.102 Reservations: Supported 00:19:30.102 Timestamp: Not Supported 00:19:30.102 Copy: Supported 00:19:30.102 Volatile Write Cache: Present 00:19:30.102 Atomic Write Unit (Normal): 1 00:19:30.102 Atomic Write Unit (PFail): 1 00:19:30.102 Atomic Compare & Write Unit: 1 00:19:30.102 Fused Compare & Write: Supported 00:19:30.102 Scatter-Gather List 00:19:30.102 SGL Command Set: Supported 00:19:30.102 SGL Keyed: Supported 00:19:30.102 SGL Bit Bucket Descriptor: Not Supported 00:19:30.102 SGL Metadata Pointer: Not Supported 00:19:30.102 Oversized SGL: Not Supported 00:19:30.102 SGL Metadata Address: Not Supported 00:19:30.102 SGL Offset: Supported 00:19:30.102 Transport SGL Data Block: Not Supported 00:19:30.102 Replay Protected Memory Block: Not Supported 00:19:30.102 00:19:30.102 Firmware Slot Information 00:19:30.102 ========================= 00:19:30.102 Active slot: 1 00:19:30.102 Slot 1 Firmware Revision: 24.09 00:19:30.102 00:19:30.102 00:19:30.102 Commands Supported and Effects 00:19:30.102 ============================== 00:19:30.102 Admin Commands 00:19:30.102 -------------- 00:19:30.102 Get Log Page (02h): Supported 00:19:30.102 Identify (06h): Supported 00:19:30.102 Abort (08h): Supported 00:19:30.102 Set Features (09h): Supported 00:19:30.102 Get Features (0Ah): Supported 00:19:30.102 Asynchronous Event Request (0Ch): Supported 00:19:30.102 Keep Alive (18h): Supported 00:19:30.102 I/O Commands 00:19:30.102 ------------ 00:19:30.102 Flush (00h): Supported LBA-Change 00:19:30.102 Write (01h): Supported LBA-Change 00:19:30.102 Read (02h): Supported 00:19:30.102 Compare (05h): Supported 00:19:30.102 Write Zeroes (08h): Supported LBA-Change 00:19:30.102 Dataset Management (09h): Supported LBA-Change 00:19:30.102 Copy (19h): Supported LBA-Change 00:19:30.102 00:19:30.102 Error Log 00:19:30.102 ========= 00:19:30.102 00:19:30.102 Arbitration 00:19:30.102 =========== 00:19:30.102 Arbitration Burst: 1 00:19:30.102 00:19:30.102 Power Management 00:19:30.102 ================ 00:19:30.102 Number of Power States: 1 00:19:30.102 Current Power State: Power State #0 00:19:30.102 Power State #0: 00:19:30.102 Max Power: 0.00 W 00:19:30.102 Non-Operational State: Operational 00:19:30.102 Entry Latency: Not Reported 00:19:30.102 Exit Latency: Not Reported 00:19:30.102 Relative Read Throughput: 0 00:19:30.102 Relative Read Latency: 0 00:19:30.102 Relative Write Throughput: 0 00:19:30.102 Relative Write Latency: 0 00:19:30.102 Idle Power: Not Reported 00:19:30.102 Active Power: Not Reported 00:19:30.102 
Non-Operational Permissive Mode: Not Supported 00:19:30.102 00:19:30.102 Health Information 00:19:30.102 ================== 00:19:30.102 Critical Warnings: 00:19:30.102 Available Spare Space: OK 00:19:30.102 Temperature: OK 00:19:30.102 Device Reliability: OK 00:19:30.102 Read Only: No 00:19:30.102 Volatile Memory Backup: OK 00:19:30.102 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:30.102 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:30.102 Available Spare: 0% 00:19:30.102 Available Spare Threshold: 0% 00:19:30.102 Life Percentage Used:[2024-07-25 13:48:26.958187] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.102 [2024-07-25 13:48:26.958200] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x18ae540) 00:19:30.102 [2024-07-25 13:48:26.958211] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.102 [2024-07-25 13:48:26.958234] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190ee40, cid 7, qid 0 00:19:30.102 [2024-07-25 13:48:26.958370] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.102 [2024-07-25 13:48:26.958383] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.102 [2024-07-25 13:48:26.958390] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.102 [2024-07-25 13:48:26.958396] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190ee40) on tqpair=0x18ae540 00:19:30.102 [2024-07-25 13:48:26.958444] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:19:30.102 [2024-07-25 13:48:26.958464] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e3c0) on tqpair=0x18ae540 00:19:30.102 [2024-07-25 13:48:26.958476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.102 [2024-07-25 13:48:26.958485] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e540) on tqpair=0x18ae540 00:19:30.102 [2024-07-25 13:48:26.958493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.102 [2024-07-25 13:48:26.958502] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e6c0) on tqpair=0x18ae540 00:19:30.102 [2024-07-25 13:48:26.958509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.103 [2024-07-25 13:48:26.958518] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e840) on tqpair=0x18ae540 00:19:30.103 [2024-07-25 13:48:26.958529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.103 [2024-07-25 13:48:26.958544] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.103 [2024-07-25 13:48:26.958553] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.103 [2024-07-25 13:48:26.958559] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ae540) 00:19:30.103 [2024-07-25 13:48:26.958570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-25 13:48:26.958595] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e840, cid 3, qid 0 00:19:30.103 [2024-07-25 13:48:26.958721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.103 [2024-07-25 13:48:26.958734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.103 [2024-07-25 13:48:26.958741] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.103 [2024-07-25 13:48:26.958748] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e840) on tqpair=0x18ae540 00:19:30.103 [2024-07-25 13:48:26.958759] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.103 [2024-07-25 13:48:26.958767] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.103 [2024-07-25 13:48:26.958774] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ae540) 00:19:30.103 [2024-07-25 13:48:26.958784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-25 13:48:26.958810] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e840, cid 3, qid 0 00:19:30.103 [2024-07-25 13:48:26.958914] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.103 [2024-07-25 13:48:26.958928] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.103 [2024-07-25 13:48:26.958935] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.103 [2024-07-25 13:48:26.958942] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e840) on tqpair=0x18ae540 00:19:30.103 [2024-07-25 13:48:26.958950] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:19:30.103 [2024-07-25 13:48:26.958959] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:19:30.103 [2024-07-25 13:48:26.958975] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.103 [2024-07-25 13:48:26.958984] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.103 [2024-07-25 13:48:26.958991] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ae540) 00:19:30.103 [2024-07-25 13:48:26.959001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-25 13:48:26.959022] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e840, cid 3, qid 0 00:19:30.103 [2024-07-25 13:48:26.963075] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.103 [2024-07-25 13:48:26.963092] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.103 [2024-07-25 13:48:26.963099] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.103 [2024-07-25 13:48:26.963106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e840) on tqpair=0x18ae540 00:19:30.103 [2024-07-25 13:48:26.963124] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:30.103 [2024-07-25 13:48:26.963134] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:30.103 [2024-07-25 13:48:26.963141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18ae540) 00:19:30.103 [2024-07-25 13:48:26.963152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-25 13:48:26.963174] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x190e840, cid 3, qid 0 00:19:30.103 [2024-07-25 13:48:26.963313] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:30.103 [2024-07-25 13:48:26.963326] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:30.103 [2024-07-25 13:48:26.963333] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:30.103 [2024-07-25 13:48:26.963340] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x190e840) on tqpair=0x18ae540 00:19:30.103 [2024-07-25 13:48:26.963354] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:19:30.103 0% 00:19:30.103 Data Units Read: 0 00:19:30.103 Data Units Written: 0 00:19:30.103 Host Read Commands: 0 00:19:30.103 Host Write Commands: 0 00:19:30.103 Controller Busy Time: 0 minutes 00:19:30.103 Power Cycles: 0 00:19:30.103 Power On Hours: 0 hours 00:19:30.103 Unsafe Shutdowns: 0 00:19:30.103 Unrecoverable Media Errors: 0 00:19:30.103 Lifetime Error Log Entries: 0 00:19:30.103 Warning Temperature Time: 0 minutes 00:19:30.103 Critical Temperature Time: 0 minutes 00:19:30.103 00:19:30.103 Number of Queues 00:19:30.103 ================ 00:19:30.103 Number of I/O Submission Queues: 127 00:19:30.103 Number of I/O Completion Queues: 127 00:19:30.103 00:19:30.103 Active Namespaces 00:19:30.103 ================= 00:19:30.103 Namespace ID:1 00:19:30.103 Error Recovery Timeout: Unlimited 00:19:30.103 Command Set Identifier: NVM (00h) 00:19:30.103 Deallocate: Supported 00:19:30.103 Deallocated/Unwritten Error: Not Supported 00:19:30.103 Deallocated Read Value: Unknown 00:19:30.103 Deallocate in Write Zeroes: Not Supported 00:19:30.103 Deallocated Guard Field: 0xFFFF 00:19:30.103 Flush: Supported 00:19:30.103 Reservation: Supported 00:19:30.103 Namespace Sharing Capabilities: Multiple Controllers 00:19:30.103 Size (in LBAs): 131072 (0GiB) 00:19:30.103 Capacity (in LBAs): 131072 (0GiB) 00:19:30.103 Utilization (in LBAs): 131072 (0GiB) 00:19:30.103 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:30.103 EUI64: ABCDEF0123456789 00:19:30.103 UUID: 898f2e4a-1812-4df3-8d1b-39407e3e88bf 00:19:30.103 Thin Provisioning: Not Supported 00:19:30.103 Per-NS Atomic Units: Yes 00:19:30.103 Atomic Boundary Size (Normal): 0 00:19:30.103 Atomic Boundary Size (PFail): 0 00:19:30.103 Atomic Boundary Offset: 0 00:19:30.103 Maximum Single Source Range Length: 65535 00:19:30.103 Maximum Copy Length: 65535 00:19:30.103 Maximum Source Range Count: 1 00:19:30.103 NGUID/EUI64 Never Reused: No 00:19:30.103 Namespace Write Protected: No 00:19:30.103 Number of LBA Formats: 1 00:19:30.103 Current LBA Format: LBA Format #00 00:19:30.103 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:30.103 00:19:30.103 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:19:30.103 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:30.103 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.103 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:30.103 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.103 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 
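
Everything from the "=====" banner down to the LBA-format table above is the identify example's report for nqn.2016-06.io.spdk:cnode1; note that its "Life Percentage Used:" line is split around the asynchronous controller-shutdown DEBUG messages, so the "0%" value only lands after "shutdown complete in 4 milliseconds". A rough sketch of the step host/identify.sh traced here, assuming the bundled identify example accepts the usual -r transport-ID string and is run from the workspace root:

  # print the controller/namespace report over the TCP listener ...
  ./build/examples/identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
  # ... then tear the subsystem down over RPC, as identify.sh@52 does above
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
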
00:19:30.103 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:19:30.104 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:30.104 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:19:30.104 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:30.104 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:19:30.104 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:30.104 13:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:30.104 rmmod nvme_tcp 00:19:30.104 rmmod nvme_fabrics 00:19:30.104 rmmod nvme_keyring 00:19:30.104 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:30.104 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:19:30.104 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:19:30.104 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 614265 ']' 00:19:30.104 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 614265 00:19:30.104 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 614265 ']' 00:19:30.104 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 614265 00:19:30.104 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:19:30.104 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:30.104 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 614265 00:19:30.104 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:30.104 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:30.104 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 614265' 00:19:30.104 killing process with pid 614265 00:19:30.104 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 614265 00:19:30.104 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 614265 00:19:30.362 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:30.362 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:30.362 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:30.362 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:30.362 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:30.362 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.362 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:30.362 13:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.904 13:48:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:32.904 00:19:32.904 real 0m5.357s 00:19:32.904 user 0m4.534s 00:19:32.904 sys 0m1.758s 00:19:32.904 13:48:29 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:32.904 13:48:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:32.904 ************************************ 00:19:32.904 END TEST nvmf_identify 00:19:32.904 ************************************ 00:19:32.904 13:48:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:32.904 13:48:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:32.904 13:48:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:32.904 13:48:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.904 ************************************ 00:19:32.904 START TEST nvmf_perf 00:19:32.904 ************************************ 00:19:32.904 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:32.905 * Looking for test storage... 00:19:32.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.905 13:48:29 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:19:32.905 13:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:34.807 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:34.807 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:19:34.807 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:34.807 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:34.807 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:34.807 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:34.807 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:34.807 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:19:34.807 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:34.807 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:19:34.807 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:19:34.807 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:19:34.807 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:19:34.807 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:19:34.807 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:19:34.807 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:34.807 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:34.807 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:34.807 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:34.807 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:34.807 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:34.807 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:34.808 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:34.808 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:34.808 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:34.808 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:34.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:34.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:19:34.808 00:19:34.808 --- 10.0.0.2 ping statistics --- 00:19:34.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.808 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:34.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:34.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:19:34.808 00:19:34.808 --- 10.0.0.1 ping statistics --- 00:19:34.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.808 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=616345 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 616345 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 616345 ']' 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:34.808 13:48:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:34.808 [2024-07-25 13:48:31.743563] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:34.808 [2024-07-25 13:48:31.743634] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.808 EAL: No free 2048 kB hugepages reported on node 1 00:19:34.808 [2024-07-25 13:48:31.804972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:35.067 [2024-07-25 13:48:31.909985] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
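[editor's note] The nvmf_tcp_init sequence traced above is easy to lose in the xtrace noise, so here is a minimal bash sketch of the topology it builds. Interface names (cvl_0_0/cvl_0_1), addresses, and the namespace name are taken from this log; error handling and the surrounding common.sh plumbing are omitted.

```bash
#!/usr/bin/env bash
# Condensed sketch of the nvmf_tcp_init steps traced above.
# One physical port (cvl_0_0) becomes the target, the other (cvl_0_1)
# stays in the root namespace as the initiator.
set -e

TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

# Isolate the target port in its own network namespace.
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

# Initiator keeps 10.0.0.1; the target gets 10.0.0.2 inside the namespace.
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic (port 4420) in on the initiator side.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity checks, mirroring the two pings in the log.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Splitting the two ports of one NIC across namespaces lets a single host act as both target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, root namespace) over real hardware, which is why the pings above traverse the wire rather than loopback.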
00:19:35.067 [2024-07-25 13:48:31.910033] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.067 [2024-07-25 13:48:31.910067] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:35.067 [2024-07-25 13:48:31.910079] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:35.067 [2024-07-25 13:48:31.910090] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:35.067 [2024-07-25 13:48:31.910154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.067 [2024-07-25 13:48:31.910207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:35.067 [2024-07-25 13:48:31.910210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.067 [2024-07-25 13:48:31.910186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.067 13:48:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:35.067 13:48:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:19:35.067 13:48:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:35.067 13:48:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:35.067 13:48:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:35.067 13:48:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.067 13:48:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:19:35.067 13:48:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:19:38.344 13:48:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:19:38.344 13:48:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:19:38.602 13:48:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:19:38.602 13:48:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:38.859 13:48:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:19:38.859 13:48:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:19:38.859 13:48:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:19:38.859 13:48:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:19:38.859 13:48:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:38.859 [2024-07-25 13:48:35.876115] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.116 13:48:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:39.373 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:39.373 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:39.630 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:19:39.631 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:19:39.631 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:39.888 [2024-07-25 13:48:36.879819] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.888 13:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:40.146 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:19:40.146 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:19:40.146 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:19:40.146 13:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:19:41.515 Initializing NVMe Controllers 00:19:41.515 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:19:41.515 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:19:41.515 Initialization complete. Launching workers. 00:19:41.515 ======================================================== 00:19:41.515 Latency(us) 00:19:41.515 Device Information : IOPS MiB/s Average min max 00:19:41.515 PCIE (0000:88:00.0) NSID 1 from core 0: 84741.93 331.02 377.10 43.79 6264.87 00:19:41.515 ======================================================== 00:19:41.515 Total : 84741.93 331.02 377.10 43.79 6264.87 00:19:41.515 00:19:41.515 13:48:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:41.515 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.882 Initializing NVMe Controllers 00:19:42.883 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:42.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:42.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:42.883 Initialization complete. Launching workers. 
00:19:42.883 ======================================================== 00:19:42.883 Latency(us) 00:19:42.883 Device Information : IOPS MiB/s Average min max 00:19:42.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 81.71 0.32 12458.59 155.57 44833.27 00:19:42.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.78 0.24 16314.85 7013.85 48843.31 00:19:42.883 ======================================================== 00:19:42.883 Total : 143.48 0.56 14118.92 155.57 48843.31 00:19:42.883 00:19:42.883 13:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:42.883 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.816 Initializing NVMe Controllers 00:19:43.816 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:43.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:43.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:43.816 Initialization complete. Launching workers. 00:19:43.816 ======================================================== 00:19:43.816 Latency(us) 00:19:43.816 Device Information : IOPS MiB/s Average min max 00:19:43.816 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8445.14 32.99 3788.66 588.38 10065.59 00:19:43.816 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3849.77 15.04 8354.41 6805.16 19021.11 00:19:43.816 ======================================================== 00:19:43.816 Total : 12294.91 48.03 5218.28 588.38 19021.11 00:19:43.816 00:19:43.816 13:48:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:19:43.816 13:48:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:19:43.816 13:48:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:43.816 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.345 Initializing NVMe Controllers 00:19:46.345 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:46.345 Controller IO queue size 128, less than required. 00:19:46.345 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:46.345 Controller IO queue size 128, less than required. 00:19:46.345 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:46.345 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:46.345 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:46.345 Initialization complete. Launching workers. 
00:19:46.345 ======================================================== 00:19:46.345 Latency(us) 00:19:46.345 Device Information : IOPS MiB/s Average min max 00:19:46.345 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1749.22 437.30 74888.03 48047.15 111246.02 00:19:46.345 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 559.27 139.82 241490.06 77309.82 373689.24 00:19:46.345 ======================================================== 00:19:46.345 Total : 2308.49 577.12 115250.19 48047.15 373689.24 00:19:46.345 00:19:46.346 13:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:19:46.346 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.603 No valid NVMe controllers or AIO or URING devices found 00:19:46.603 Initializing NVMe Controllers 00:19:46.603 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:46.603 Controller IO queue size 128, less than required. 00:19:46.603 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:46.603 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:19:46.603 Controller IO queue size 128, less than required. 00:19:46.603 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:46.603 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:19:46.603 WARNING: Some requested NVMe devices were skipped 00:19:46.603 13:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:19:46.603 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.132 Initializing NVMe Controllers 00:19:49.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:49.132 Controller IO queue size 128, less than required. 00:19:49.132 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:49.132 Controller IO queue size 128, less than required. 00:19:49.132 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:49.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:49.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:49.132 Initialization complete. Launching workers. 
00:19:49.132 00:19:49.132 ==================== 00:19:49.132 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:19:49.132 TCP transport: 00:19:49.132 polls: 10107 00:19:49.132 idle_polls: 6417 00:19:49.132 sock_completions: 3690 00:19:49.132 nvme_completions: 5705 00:19:49.132 submitted_requests: 8396 00:19:49.132 queued_requests: 1 00:19:49.132 00:19:49.132 ==================== 00:19:49.132 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:19:49.132 TCP transport: 00:19:49.132 polls: 10109 00:19:49.132 idle_polls: 6783 00:19:49.132 sock_completions: 3326 00:19:49.132 nvme_completions: 6183 00:19:49.132 submitted_requests: 9308 00:19:49.132 queued_requests: 1 00:19:49.132 ======================================================== 00:19:49.132 Latency(us) 00:19:49.132 Device Information : IOPS MiB/s Average min max 00:19:49.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1424.48 356.12 92238.70 64711.50 155671.61 00:19:49.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1543.86 385.96 83486.73 40426.48 136143.40 00:19:49.132 ======================================================== 00:19:49.132 Total : 2968.34 742.09 87686.73 40426.48 155671.61 00:19:49.132 00:19:49.132 13:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:19:49.132 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:49.391 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:19:49.391 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:19:49.391 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:19:49.391 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:49.391 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:19:49.391 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:49.391 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:19:49.391 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:49.391 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:49.391 rmmod nvme_tcp 00:19:49.391 rmmod nvme_fabrics 00:19:49.391 rmmod nvme_keyring 00:19:49.391 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:49.391 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:19:49.391 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:19:49.391 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 616345 ']' 00:19:49.391 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 616345 00:19:49.391 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 616345 ']' 00:19:49.391 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 616345 00:19:49.391 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:19:49.391 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:49.391 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 616345 00:19:49.391 13:48:46 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:49.391 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:49.391 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 616345' 00:19:49.391 killing process with pid 616345 00:19:49.391 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 616345 00:19:49.391 13:48:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 616345 00:19:51.291 13:48:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:51.291 13:48:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:51.291 13:48:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:51.291 13:48:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:51.291 13:48:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:51.291 13:48:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.291 13:48:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:51.291 13:48:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.190 13:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:53.190 00:19:53.190 real 0m20.598s 00:19:53.190 user 1m2.940s 00:19:53.190 sys 0m5.225s 00:19:53.190 13:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:53.190 13:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:53.190 ************************************ 00:19:53.190 END TEST nvmf_perf 00:19:53.190 ************************************ 00:19:53.190 13:48:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:53.190 13:48:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:53.190 13:48:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:53.190 13:48:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.190 ************************************ 00:19:53.190 START TEST nvmf_fio_host 00:19:53.191 ************************************ 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:53.191 * Looking for test storage... 
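[editor's note] Before the fio host test begins, the nvmftestfini teardown just traced reduces to roughly the following sketch. The pid and namespace name are the ones from this run; the `ip netns delete` line is an assumption about what the _remove_spdk_ns helper does, since its body is not visible in this log.

```bash
# Rough sketch of the nvmftestfini path traced above.
# 616345 is this run's nvmfpid; cvl_0_0_ns_spdk is this run's namespace.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem

modprobe -v -r nvme-tcp   # also unloads nvme_fabrics and nvme_keyring, per the rmmod lines above
kill 616345               # stop the nvmf_tgt reactor process
ip netns delete cvl_0_0_ns_spdk   # assumed body of the _remove_spdk_ns helper
ip -4 addr flush cvl_0_1          # return the initiator port to a clean state
```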
00:19:53.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:19:53.191 13:48:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:55.720 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:55.720 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.720 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:55.721 
13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:55.721 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:55.721 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:55.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:55.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:19:55.721 00:19:55.721 --- 10.0.0.2 ping statistics --- 00:19:55.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.721 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:55.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:55.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:19:55.721 00:19:55.721 --- 10.0.0.1 ping statistics --- 00:19:55.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.721 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=620187 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # 
waitforlisten 620187 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 620187 ']' 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:55.721 13:48:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.721 [2024-07-25 13:48:52.481853] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:55.721 [2024-07-25 13:48:52.481923] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.721 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.721 [2024-07-25 13:48:52.547672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:55.721 [2024-07-25 13:48:52.654981] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.721 [2024-07-25 13:48:52.655028] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.721 [2024-07-25 13:48:52.655064] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.721 [2024-07-25 13:48:52.655077] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.721 [2024-07-25 13:48:52.655086] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
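[editor's note] The nvmfappstart pattern traced above (here and in the earlier perf test) is: launch nvmf_tgt inside the target namespace, record its pid, and block until the RPC socket answers. A sketch follows; the polling loop is an assumed stand-in for the waitforlisten helper, whose body is not shown in this log.

```bash
# Sketch of nvmfappstart as traced above. -m 0xF pins reactors to cores 0-3,
# matching the four "Reactor started" notices; -e 0xFFFF enables all
# tracepoint groups; -i 0 sets the shared-memory id.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Assumed stand-in for waitforlisten: poll until the target's JSON-RPC
# socket at /var/tmp/spdk.sock responds.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
```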
00:19:55.721 [2024-07-25 13:48:52.655205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.721 [2024-07-25 13:48:52.655229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.721 [2024-07-25 13:48:52.655287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:55.721 [2024-07-25 13:48:52.655290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.654 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:56.654 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:19:56.654 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:56.654 [2024-07-25 13:48:53.682450] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.911 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:19:56.911 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:56.911 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.911 13:48:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:57.169 Malloc1 00:19:57.169 13:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:57.426 13:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:57.683 13:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:57.940 [2024-07-25 13:48:54.768362] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.940 13:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:58.198 13:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:19:58.198 13:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:58.198 13:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:58.198 13:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:58.198 13:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:58.198 13:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:58.198 13:48:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:19:58.198 13:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:19:58.198 13:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:58.198 13:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:58.198 13:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:19:58.198 13:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:19:58.198 13:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:58.198 13:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:58.198 13:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:58.198 13:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:58.198 13:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:19:58.198 13:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:58.198 13:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:58.198 13:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:58.198 13:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:58.198 13:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:19:58.198 13:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:58.456 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:58.456 fio-3.35 00:19:58.456 Starting 1 thread 00:19:58.456 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.982 00:20:00.982 test: (groupid=0, jobs=1): err= 0: pid=620671: Thu Jul 25 13:48:57 2024 00:20:00.982 read: IOPS=8964, BW=35.0MiB/s (36.7MB/s)(70.3MiB/2007msec) 00:20:00.982 slat (usec): min=2, max=114, avg= 2.57, stdev= 1.65 00:20:00.982 clat (usec): min=2253, max=13417, avg=7795.44, stdev=634.58 00:20:00.982 lat (usec): min=2276, max=13420, avg=7798.01, stdev=634.50 00:20:00.982 clat percentiles (usec): 00:20:00.982 | 1.00th=[ 6325], 5.00th=[ 6783], 10.00th=[ 7046], 20.00th=[ 7308], 00:20:00.982 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 7963], 00:20:00.982 | 70.00th=[ 8094], 80.00th=[ 8291], 90.00th=[ 8586], 95.00th=[ 8717], 00:20:00.982 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[11076], 99.95th=[12387], 00:20:00.982 | 99.99th=[13435] 00:20:00.982 bw ( KiB/s): min=34872, max=36344, per=99.98%, avg=35850.00, stdev=662.93, samples=4 00:20:00.982 iops : min= 8718, max= 9086, avg=8962.50, stdev=165.73, samples=4 00:20:00.982 write: IOPS=8982, BW=35.1MiB/s (36.8MB/s)(70.4MiB/2007msec); 0 zone resets 
00:20:00.982 slat (usec): min=2, max=106, avg= 2.68, stdev= 1.53 00:20:00.982 clat (usec): min=1025, max=12610, avg=6422.15, stdev=541.63 00:20:00.982 lat (usec): min=1031, max=12612, avg=6424.83, stdev=541.61 00:20:00.982 clat percentiles (usec): 00:20:00.982 | 1.00th=[ 5211], 5.00th=[ 5669], 10.00th=[ 5800], 20.00th=[ 5997], 00:20:00.982 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6390], 60.00th=[ 6521], 00:20:00.982 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 7046], 95.00th=[ 7177], 00:20:00.982 | 99.00th=[ 7570], 99.50th=[ 7701], 99.90th=[11076], 99.95th=[11469], 00:20:00.982 | 99.99th=[12387] 00:20:00.982 bw ( KiB/s): min=35584, max=36208, per=100.00%, avg=35934.00, stdev=300.64, samples=4 00:20:00.982 iops : min= 8896, max= 9052, avg=8983.50, stdev=75.16, samples=4 00:20:00.982 lat (msec) : 2=0.02%, 4=0.11%, 10=99.68%, 20=0.18% 00:20:00.982 cpu : usr=65.90%, sys=32.40%, ctx=90, majf=0, minf=40 00:20:00.982 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:00.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:00.982 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:00.982 issued rwts: total=17991,18028,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:00.982 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:00.982 00:20:00.982 Run status group 0 (all jobs): 00:20:00.982 READ: bw=35.0MiB/s (36.7MB/s), 35.0MiB/s-35.0MiB/s (36.7MB/s-36.7MB/s), io=70.3MiB (73.7MB), run=2007-2007msec 00:20:00.982 WRITE: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=70.4MiB (73.8MB), run=2007-2007msec 00:20:00.982 13:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:00.982 13:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:00.982 13:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:00.982 13:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:00.982 13:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:00.982 13:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:00.982 13:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:00.982 13:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:00.982 13:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:00.982 13:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:00.982 13:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:00.982 13:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:00.982 13:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:20:00.983 13:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:00.983 13:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:00.983 13:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:00.983 13:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:00.983 13:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:00.983 13:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:00.983 13:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:00.983 13:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:00.983 13:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:00.983 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:00.983 fio-3.35 00:20:00.983 Starting 1 thread 00:20:00.983 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.509 00:20:03.509 test: (groupid=0, jobs=1): err= 0: pid=621005: Thu Jul 25 13:49:00 2024 00:20:03.509 read: IOPS=8088, BW=126MiB/s (133MB/s)(254MiB/2008msec) 00:20:03.509 slat (nsec): min=2873, max=93234, avg=3861.12, stdev=1977.83 00:20:03.509 clat (usec): min=2598, max=16613, avg=8816.95, stdev=1972.34 00:20:03.509 lat (usec): min=2601, max=16617, avg=8820.81, stdev=1972.41 00:20:03.509 clat percentiles (usec): 00:20:03.509 | 1.00th=[ 4883], 5.00th=[ 5735], 10.00th=[ 6390], 20.00th=[ 7177], 00:20:03.509 | 30.00th=[ 7701], 40.00th=[ 8291], 50.00th=[ 8717], 60.00th=[ 9241], 00:20:03.509 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[11338], 95.00th=[12256], 00:20:03.509 | 99.00th=[14222], 99.50th=[14746], 99.90th=[16319], 99.95th=[16450], 00:20:03.509 | 99.99th=[16581] 00:20:03.509 bw ( KiB/s): min=62784, max=80576, per=53.79%, avg=69608.00, stdev=8165.10, samples=4 00:20:03.509 iops : min= 3924, max= 5036, avg=4350.50, stdev=510.32, samples=4 00:20:03.509 write: IOPS=4872, BW=76.1MiB/s (79.8MB/s)(142MiB/1863msec); 0 zone resets 00:20:03.509 slat (usec): min=30, max=194, avg=34.95, stdev= 6.54 00:20:03.509 clat (usec): min=4950, max=19813, avg=11831.72, stdev=1963.79 00:20:03.509 lat (usec): min=4981, max=19846, avg=11866.67, stdev=1964.10 00:20:03.509 clat percentiles (usec): 00:20:03.509 | 1.00th=[ 7701], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10290], 00:20:03.509 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11731], 60.00th=[12125], 00:20:03.509 | 70.00th=[12649], 80.00th=[13435], 90.00th=[14353], 95.00th=[15401], 00:20:03.509 | 99.00th=[16909], 99.50th=[17957], 99.90th=[19268], 99.95th=[19530], 00:20:03.509 | 99.99th=[19792] 00:20:03.509 bw ( KiB/s): min=65056, max=82304, per=92.39%, avg=72032.00, stdev=7797.54, samples=4 00:20:03.509 iops : min= 4066, max= 5144, avg=4502.00, stdev=487.35, samples=4 00:20:03.509 lat (msec) : 4=0.19%, 10=54.07%, 20=45.73% 00:20:03.509 cpu : usr=75.19%, sys=23.47%, ctx=35, majf=0, minf=72 00:20:03.509 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:20:03.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:03.509 issued rwts: total=16242,9078,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:03.509 00:20:03.509 Run status group 0 (all jobs): 00:20:03.509 READ: bw=126MiB/s (133MB/s), 126MiB/s-126MiB/s (133MB/s-133MB/s), io=254MiB (266MB), run=2008-2008msec 00:20:03.509 WRITE: bw=76.1MiB/s (79.8MB/s), 76.1MiB/s-76.1MiB/s (79.8MB/s-79.8MB/s), io=142MiB (149MB), run=1863-1863msec 00:20:03.509 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:03.767 rmmod nvme_tcp 00:20:03.767 rmmod nvme_fabrics 00:20:03.767 rmmod nvme_keyring 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 620187 ']' 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 620187 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 620187 ']' 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 620187 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 620187 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 620187' 00:20:03.767 killing process with pid 620187 00:20:03.767 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 620187 00:20:03.767 13:49:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 620187 00:20:04.027 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:04.027 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:04.027 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:04.027 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:04.027 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:04.027 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.027 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.027 13:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.564 13:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:06.564 00:20:06.564 real 0m12.893s 00:20:06.564 user 0m38.343s 00:20:06.564 sys 0m4.393s 00:20:06.564 13:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:06.564 13:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.564 ************************************ 00:20:06.564 END TEST nvmf_fio_host 00:20:06.564 ************************************ 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.564 ************************************ 00:20:06.564 START TEST nvmf_failover 00:20:06.564 ************************************ 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:06.564 * Looking for test storage... 
00:20:06.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:06.564 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:06.565 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:06.565 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:06.565 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
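
nvmftestinit, expanded in the trace below, turns the two detected e810 ports into a point-to-point NVMe/TCP test bed: one port is moved into the cvl_0_0_ns_spdk network namespace to act as the target at 10.0.0.2 while its sibling stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of that plumbing, using the device names this rig reports and assuming (as the phy setup implies) that the two ports are cabled to each other; the same commands appear verbatim in the trace:

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open NVMe/TCP port 4420 in the local firewall
  ping -c 1 10.0.0.2                                                  # reachability check before any NVMe traffic
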
00:20:06.565 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:06.565 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.565 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:06.565 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:06.565 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:06.565 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.565 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:06.565 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.565 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:06.565 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:06.565 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:20:06.565 13:49:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.468 13:49:05 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:08.468 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:08.468 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:08.468 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:08.468 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.468 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:08.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:20:08.469 00:20:08.469 --- 10.0.0.2 ping statistics --- 00:20:08.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.469 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:08.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:08.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:20:08.469 00:20:08.469 --- 10.0.0.1 ping statistics --- 00:20:08.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.469 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=623196 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 623196 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 623196 ']' 00:20:08.469 13:49:05 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:08.469 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:08.469 [2024-07-25 13:49:05.333716] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:08.469 [2024-07-25 13:49:05.333812] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.469 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.469 [2024-07-25 13:49:05.399155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:08.727 [2024-07-25 13:49:05.511760] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.727 [2024-07-25 13:49:05.511806] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.727 [2024-07-25 13:49:05.511819] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.727 [2024-07-25 13:49:05.511831] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.727 [2024-07-25 13:49:05.511841] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
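
The target launch traced above, annotated; the flag meanings are the stock SPDK application options, and the core placement can be checked against the reactor notices that follow:

  # -i 0       shared-memory instance id (what the later process_shm/killprocess bookkeeping keys off)
  # -e 0xFFFF  tracepoint group mask, hence the /dev/shm/nvmf_trace.0 hint printed above
  # -m 0xE     core mask 0b1110, i.e. reactors on cores 1, 2 and 3
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
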
00:20:08.727 [2024-07-25 13:49:05.511993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.727 [2024-07-25 13:49:05.512067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:08.727 [2024-07-25 13:49:05.512071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.727 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:08.727 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:20:08.727 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:08.727 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:08.727 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:08.727 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.727 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:08.993 [2024-07-25 13:49:05.872747] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.993 13:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:09.316 Malloc0 00:20:09.316 13:49:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:09.574 13:49:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:09.831 13:49:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:10.089 [2024-07-25 13:49:06.896298] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.089 13:49:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:10.348 [2024-07-25 13:49:07.144998] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:10.348 13:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:10.606 [2024-07-25 13:49:07.397845] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:10.606 13:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=623489 00:20:10.606 13:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:20:10.606 13:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:10.606 13:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 623489 /var/tmp/bdevperf.sock 00:20:10.606 13:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 623489 ']' 00:20:10.606 13:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:10.606 13:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:10.606 13:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:10.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:10.606 13:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:10.606 13:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:10.864 13:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:10.864 13:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:20:10.864 13:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:11.122 NVMe0n1 00:20:11.380 13:49:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:11.637 00:20:11.637 13:49:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=623621 00:20:11.637 13:49:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:20:11.637 13:49:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:12.569 13:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:12.827 [2024-07-25 13:49:09.715551] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179bf40 is same with the state(5) to be set 00:20:12.827 [2024-07-25 13:49:09.715620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179bf40 is same with the state(5) to be set 00:20:12.827 [2024-07-25 13:49:09.715635] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179bf40 is same with the state(5) to be set 00:20:12.827 [2024-07-25 13:49:09.715647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179bf40 is same with the state(5) to be set 00:20:12.827 [2024-07-25 13:49:09.715660] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179bf40 is same with the state(5) to be set 00:20:12.827 [2024-07-25 13:49:09.715672] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179bf40 is same with the state(5) to be set 00:20:12.827 [2024-07-25 13:49:09.715684] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x179bf40 is same with the 
state(5) to be set 00:20:12.828 [2024-07-25 13:49:09.716257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179bf40 is same with the state(5) to be set 00:20:12.828 [2024-07-25 13:49:09.716270] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179bf40 is same with the state(5) to be set 00:20:12.828 [2024-07-25 13:49:09.716282] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179bf40 is same with the state(5) to be set 00:20:12.828 [2024-07-25 13:49:09.716294] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179bf40 is same with the state(5) to be set 00:20:12.828 13:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:20:16.109 13:49:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:16.367 00:20:16.367 13:49:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:16.625 [2024-07-25 13:49:13.489976] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cd10 is same with the state(5) to be set 00:20:16.625 [2024-07-25 13:49:13.490026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cd10 is same with the state(5) to be set 00:20:16.625 [2024-07-25 13:49:13.490078] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cd10 is same with the state(5) to be set 00:20:16.625 [2024-07-25 13:49:13.490092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cd10 is same with the state(5) to be set 00:20:16.625 [2024-07-25 13:49:13.490114] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cd10 is same with the state(5) to be set 00:20:16.625 [2024-07-25 13:49:13.490126] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cd10 is same with the state(5) to be set 00:20:16.625 [2024-07-25 13:49:13.490137] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cd10 is same with the state(5) to be set 00:20:16.625 [2024-07-25 13:49:13.490149] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cd10 is same with the state(5) to be set 00:20:16.625 13:49:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:20:19.906 13:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:19.906 [2024-07-25 13:49:16.734399] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.906 13:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:20:20.840 13:49:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:21.098 [2024-07-25 13:49:18.005870] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 
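
Condensed, the choreography behind the three error bursts above: bdevperf first attached NVMe0 to nqn.2016-06.io.spdk:cnode1 twice (ports 4420 and 4421), giving the controller an alternate path, and the script then alternated removing the live listener and adding a fresh one while the 15-second verify workload ran. Each nvmf_subsystem_remove_listener drops the active qpairs, which the target logs as the recv-state errors; reading each burst as an I/O failover to the surviving path is an interpretation of the trace, not something the log states. The RPC sequence, as traced:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # burst 1: fail over to 4421
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # burst 2: fail over to 4422
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # burst 3: fail back to 4420
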
00:20:21.098 [2024-07-25 13:49:18.005921] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.098 [2024-07-25 13:49:18.005943] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.098 [2024-07-25 13:49:18.005955] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.098 [2024-07-25 13:49:18.005966] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.098 [2024-07-25 13:49:18.005978] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.098 [2024-07-25 13:49:18.005989] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.098 [2024-07-25 13:49:18.006001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.098 [2024-07-25 13:49:18.006012] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.098 [2024-07-25 13:49:18.006024] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.098 [2024-07-25 13:49:18.006035] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.098 [2024-07-25 13:49:18.006047] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.098 [2024-07-25 13:49:18.006091] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.098 [2024-07-25 13:49:18.006105] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.098 [2024-07-25 13:49:18.006117] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.098 [2024-07-25 13:49:18.006130] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.098 [2024-07-25 13:49:18.006142] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.098 [2024-07-25 13:49:18.006154] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.098 [2024-07-25 13:49:18.006166] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.098 [2024-07-25 13:49:18.006179] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.098 [2024-07-25 13:49:18.006192] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006205] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006247] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006282] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006294] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006318] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006393] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006406] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006416] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006428] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006439] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006451] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006473] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006484] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006495] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dab0 is same with the state(5) to be set 00:20:21.099 [2024-07-25 13:49:18.006506] 
00:20:21.099 13:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 623621
00:20:27.665 0
00:20:27.665 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 623489
00:20:27.665 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 623489 ']'
00:20:27.665 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 623489
00:20:27.665 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:20:27.665 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:27.665 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 623489
00:20:27.665 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:20:27.665 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:20:27.665 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 623489'
00:20:27.665 killing process with pid 623489
00:20:27.665 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 623489
00:20:27.665 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 623489
00:20:27.665 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:20:27.665 [2024-07-25 13:49:07.456802] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:20:27.665 [2024-07-25 13:49:07.456900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid623489 ]
00:20:27.665 EAL: No free 2048 kB hugepages reported on node 1
00:20:27.665 [2024-07-25 13:49:07.516401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:27.665 [2024-07-25 13:49:07.624023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:20:27.665 Running I/O for 15 seconds...
00:20:27.665 [2024-07-25 13:49:09.717958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:27.665 [2024-07-25 13:49:09.718006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs collapsed for the remaining READ commands (lba:78416 through lba:78592, SGL TRANSPORT DATA BLOCK) and WRITE commands (lba:78600 through lba:79272, SGL DATA BLOCK OFFSET), each completed as ABORTED - SQ DELETION (00/08); duplicate lines collapsed ...]
00:20:27.668 [2024-07-25 13:49:09.721538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:27.668 [2024-07-25 13:49:09.721561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79280 len:8 PRP1 0x0 PRP2 0x0
00:20:27.668 [2024-07-25 13:49:09.721575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:27.668 [2024-07-25 13:49:09.721651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:27.668 [2024-07-25 13:49:09.721674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same ASYNC EVENT REQUEST command/completion pair repeated for qid:0 cid:1 through cid:3; duplicate lines collapsed ...]
00:20:27.668 [2024-07-25 13:49:09.721786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fc0f0 is same with the state(5) to be set
00:20:27.668 [2024-07-25 13:49:09.722037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:27.668 [2024-07-25 13:49:09.722080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:27.668 [2024-07-25 13:49:09.722097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79288 len:8 PRP1 0x0 PRP2 0x0
00:20:27.668 [2024-07-25 13:49:09.722111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same abort/manual-completion sequence repeated for WRITE lba:79296 through lba:79424 and for READ lba:78408 through lba:78520; duplicate lines collapsed ...]
13:49:09.737982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78528 len:8 PRP1 0x0 PRP2 0x0 00:20:27.670 [2024-07-25 13:49:09.737995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.670 [2024-07-25 13:49:09.738008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.670 [2024-07-25 13:49:09.738019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.670 [2024-07-25 13:49:09.738030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78600 len:8 PRP1 0x0 PRP2 0x0 00:20:27.670 [2024-07-25 13:49:09.738057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.670 [2024-07-25 13:49:09.738080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.670 [2024-07-25 13:49:09.738098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.670 [2024-07-25 13:49:09.738109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78608 len:8 PRP1 0x0 PRP2 0x0 00:20:27.670 [2024-07-25 13:49:09.738123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.670 [2024-07-25 13:49:09.738137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.670 [2024-07-25 13:49:09.738148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.670 [2024-07-25 13:49:09.738160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78616 len:8 PRP1 0x0 PRP2 0x0 00:20:27.670 [2024-07-25 13:49:09.738177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.670 [2024-07-25 13:49:09.738192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.670 [2024-07-25 13:49:09.738203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.670 [2024-07-25 13:49:09.738215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78624 len:8 PRP1 0x0 PRP2 0x0 00:20:27.670 [2024-07-25 13:49:09.738229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.670 [2024-07-25 13:49:09.738242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.670 [2024-07-25 13:49:09.738254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.670 [2024-07-25 13:49:09.738266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78632 len:8 PRP1 0x0 PRP2 0x0 00:20:27.670 [2024-07-25 13:49:09.738280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.670 [2024-07-25 13:49:09.738293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.670 [2024-07-25 13:49:09.738305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.670 [2024-07-25 13:49:09.738317] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78640 len:8 PRP1 0x0 PRP2 0x0 00:20:27.670 [2024-07-25 13:49:09.738330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.670 [2024-07-25 13:49:09.738362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.670 [2024-07-25 13:49:09.738374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.670 [2024-07-25 13:49:09.738386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78648 len:8 PRP1 0x0 PRP2 0x0 00:20:27.670 [2024-07-25 13:49:09.738399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.670 [2024-07-25 13:49:09.738427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.670 [2024-07-25 13:49:09.738439] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.670 [2024-07-25 13:49:09.738450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78656 len:8 PRP1 0x0 PRP2 0x0 00:20:27.670 [2024-07-25 13:49:09.738462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.670 [2024-07-25 13:49:09.738476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.670 [2024-07-25 13:49:09.738487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.670 [2024-07-25 13:49:09.738498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78536 len:8 PRP1 0x0 PRP2 0x0 00:20:27.670 [2024-07-25 13:49:09.738511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.670 [2024-07-25 13:49:09.738523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.670 [2024-07-25 13:49:09.738534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.670 [2024-07-25 13:49:09.738545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78664 len:8 PRP1 0x0 PRP2 0x0 00:20:27.670 [2024-07-25 13:49:09.738558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.670 [2024-07-25 13:49:09.738571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.670 [2024-07-25 13:49:09.738582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.670 [2024-07-25 13:49:09.738597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78672 len:8 PRP1 0x0 PRP2 0x0 00:20:27.670 [2024-07-25 13:49:09.738610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.670 [2024-07-25 13:49:09.738624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.670 [2024-07-25 13:49:09.738635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.670 [2024-07-25 13:49:09.738646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:78680 len:8 PRP1 0x0 PRP2 0x0 00:20:27.670 [2024-07-25 13:49:09.738658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.671 [2024-07-25 13:49:09.738672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.671 [2024-07-25 13:49:09.738683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.671 [2024-07-25 13:49:09.738694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78688 len:8 PRP1 0x0 PRP2 0x0 00:20:27.671 [2024-07-25 13:49:09.738707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.671 [2024-07-25 13:49:09.738720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.671 [2024-07-25 13:49:09.738731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.671 [2024-07-25 13:49:09.738743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78696 len:8 PRP1 0x0 PRP2 0x0 00:20:27.671 [2024-07-25 13:49:09.738756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.671 [2024-07-25 13:49:09.738769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.671 [2024-07-25 13:49:09.738780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.671 [2024-07-25 13:49:09.738792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78704 len:8 PRP1 0x0 PRP2 0x0 00:20:27.671 [2024-07-25 13:49:09.738804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.671 [2024-07-25 13:49:09.738818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.671 [2024-07-25 13:49:09.738829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.671 [2024-07-25 13:49:09.738840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78712 len:8 PRP1 0x0 PRP2 0x0 00:20:27.671 [2024-07-25 13:49:09.738852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.671 [2024-07-25 13:49:09.738865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.671 [2024-07-25 13:49:09.738877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.671 [2024-07-25 13:49:09.738888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78720 len:8 PRP1 0x0 PRP2 0x0 00:20:27.671 [2024-07-25 13:49:09.738901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.671 [2024-07-25 13:49:09.738914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.671 [2024-07-25 13:49:09.738925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.671 [2024-07-25 13:49:09.738936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78728 len:8 PRP1 0x0 PRP2 0x0 
00:20:27.671 [2024-07-25 13:49:09.738950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.671 [2024-07-25 13:49:09.738963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.671 [2024-07-25 13:49:09.738977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.671 [2024-07-25 13:49:09.738989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78736 len:8 PRP1 0x0 PRP2 0x0 00:20:27.671 [2024-07-25 13:49:09.739002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.671 [2024-07-25 13:49:09.739015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.671 [2024-07-25 13:49:09.739027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.671 [2024-07-25 13:49:09.739053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78744 len:8 PRP1 0x0 PRP2 0x0 00:20:27.671 [2024-07-25 13:49:09.739076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.671 [2024-07-25 13:49:09.739100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.671 [2024-07-25 13:49:09.739113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.671 [2024-07-25 13:49:09.739124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78752 len:8 PRP1 0x0 PRP2 0x0 00:20:27.671 [2024-07-25 13:49:09.739138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.671 [2024-07-25 13:49:09.739152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.671 [2024-07-25 13:49:09.739164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.671 [2024-07-25 13:49:09.739176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78760 len:8 PRP1 0x0 PRP2 0x0 00:20:27.671 [2024-07-25 13:49:09.739190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.671 [2024-07-25 13:49:09.739204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.671 [2024-07-25 13:49:09.739215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.671 [2024-07-25 13:49:09.739227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78768 len:8 PRP1 0x0 PRP2 0x0 00:20:27.671 [2024-07-25 13:49:09.739241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.671 [2024-07-25 13:49:09.739255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.671 [2024-07-25 13:49:09.739267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.671 [2024-07-25 13:49:09.739279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78776 len:8 PRP1 0x0 PRP2 0x0 00:20:27.671 [2024-07-25 13:49:09.739292] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.671 [2024-07-25 13:49:09.739306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.671 [2024-07-25 13:49:09.739318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.671 [2024-07-25 13:49:09.739329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78784 len:8 PRP1 0x0 PRP2 0x0 00:20:27.671 [2024-07-25 13:49:09.739369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.671 [2024-07-25 13:49:09.739383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.671 [2024-07-25 13:49:09.739395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.671 [2024-07-25 13:49:09.739421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78792 len:8 PRP1 0x0 PRP2 0x0 00:20:27.671 [2024-07-25 13:49:09.739435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.671 [2024-07-25 13:49:09.739452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.671 [2024-07-25 13:49:09.739463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.671 [2024-07-25 13:49:09.739474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78800 len:8 PRP1 0x0 PRP2 0x0 00:20:27.671 [2024-07-25 13:49:09.739487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.671 [2024-07-25 13:49:09.739501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.671 [2024-07-25 13:49:09.739513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.671 [2024-07-25 13:49:09.739524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78808 len:8 PRP1 0x0 PRP2 0x0 00:20:27.671 [2024-07-25 13:49:09.739537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.671 [2024-07-25 13:49:09.739550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.671 [2024-07-25 13:49:09.739561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.671 [2024-07-25 13:49:09.739572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78816 len:8 PRP1 0x0 PRP2 0x0 00:20:27.671 [2024-07-25 13:49:09.739592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.671 [2024-07-25 13:49:09.739606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.671 [2024-07-25 13:49:09.739617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.671 [2024-07-25 13:49:09.739628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78824 len:8 PRP1 0x0 PRP2 0x0 00:20:27.671 [2024-07-25 13:49:09.739641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.671 [2024-07-25 13:49:09.739653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.671 [2024-07-25 13:49:09.739664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.671 [2024-07-25 13:49:09.739675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78832 len:8 PRP1 0x0 PRP2 0x0 00:20:27.671 [2024-07-25 13:49:09.739688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.671 [2024-07-25 13:49:09.739700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.671 [2024-07-25 13:49:09.739711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.671 [2024-07-25 13:49:09.739722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78840 len:8 PRP1 0x0 PRP2 0x0 00:20:27.671 [2024-07-25 13:49:09.739735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.671 [2024-07-25 13:49:09.739748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.671 [2024-07-25 13:49:09.739758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.671 [2024-07-25 13:49:09.739769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78848 len:8 PRP1 0x0 PRP2 0x0 00:20:27.671 [2024-07-25 13:49:09.739782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.671 [2024-07-25 13:49:09.739795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.672 [2024-07-25 13:49:09.739806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.672 [2024-07-25 13:49:09.739816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78856 len:8 PRP1 0x0 PRP2 0x0 00:20:27.672 [2024-07-25 13:49:09.739832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.672 [2024-07-25 13:49:09.739845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.672 [2024-07-25 13:49:09.739856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.672 [2024-07-25 13:49:09.739867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78864 len:8 PRP1 0x0 PRP2 0x0 00:20:27.672 [2024-07-25 13:49:09.739880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.672 [2024-07-25 13:49:09.739893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.672 [2024-07-25 13:49:09.739904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.672 [2024-07-25 13:49:09.739915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78872 len:8 PRP1 0x0 PRP2 0x0 00:20:27.672 [2024-07-25 13:49:09.739928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:27.672 [2024-07-25 13:49:09.739941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.672 [2024-07-25 13:49:09.739952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.672 [2024-07-25 13:49:09.739963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78880 len:8 PRP1 0x0 PRP2 0x0 00:20:27.672 [2024-07-25 13:49:09.739980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.672 [2024-07-25 13:49:09.739994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.672 [2024-07-25 13:49:09.740005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.672 [2024-07-25 13:49:09.740016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78888 len:8 PRP1 0x0 PRP2 0x0 00:20:27.672 [2024-07-25 13:49:09.740028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.672 [2024-07-25 13:49:09.740056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.672 [2024-07-25 13:49:09.740079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.672 [2024-07-25 13:49:09.740092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78896 len:8 PRP1 0x0 PRP2 0x0 00:20:27.672 [2024-07-25 13:49:09.740105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.672 [2024-07-25 13:49:09.740119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.672 [2024-07-25 13:49:09.740131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.672 [2024-07-25 13:49:09.740143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78904 len:8 PRP1 0x0 PRP2 0x0 00:20:27.672 [2024-07-25 13:49:09.740156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.672 [2024-07-25 13:49:09.740170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.672 [2024-07-25 13:49:09.740182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.672 [2024-07-25 13:49:09.740193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78912 len:8 PRP1 0x0 PRP2 0x0 00:20:27.672 [2024-07-25 13:49:09.740206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.672 [2024-07-25 13:49:09.740220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.672 [2024-07-25 13:49:09.740232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.672 [2024-07-25 13:49:09.740247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78920 len:8 PRP1 0x0 PRP2 0x0 00:20:27.672 [2024-07-25 13:49:09.740260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.672 [2024-07-25 13:49:09.740274] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.672 [2024-07-25 13:49:09.740286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.672 [2024-07-25 13:49:09.740297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78928 len:8 PRP1 0x0 PRP2 0x0 00:20:27.672 [2024-07-25 13:49:09.740311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.672 [2024-07-25 13:49:09.740324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.672 [2024-07-25 13:49:09.740336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.672 [2024-07-25 13:49:09.740369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78936 len:8 PRP1 0x0 PRP2 0x0 00:20:27.672 [2024-07-25 13:49:09.740382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.672 [2024-07-25 13:49:09.740396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.672 [2024-07-25 13:49:09.740422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.672 [2024-07-25 13:49:09.740434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78944 len:8 PRP1 0x0 PRP2 0x0 00:20:27.672 [2024-07-25 13:49:09.740447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.672 [2024-07-25 13:49:09.740461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.672 [2024-07-25 13:49:09.740473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.672 [2024-07-25 13:49:09.740485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78952 len:8 PRP1 0x0 PRP2 0x0 00:20:27.672 [2024-07-25 13:49:09.740497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.672 [2024-07-25 13:49:09.740511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.672 [2024-07-25 13:49:09.740522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.672 [2024-07-25 13:49:09.740533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78960 len:8 PRP1 0x0 PRP2 0x0 00:20:27.672 [2024-07-25 13:49:09.740545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.672 [2024-07-25 13:49:09.740559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.672 [2024-07-25 13:49:09.740570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.672 [2024-07-25 13:49:09.740582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78968 len:8 PRP1 0x0 PRP2 0x0 00:20:27.672 [2024-07-25 13:49:09.740594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.672 [2024-07-25 13:49:09.740608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:20:27.672 [2024-07-25 13:49:09.740619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.672 [2024-07-25 13:49:09.740630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78976 len:8 PRP1 0x0 PRP2 0x0 00:20:27.672 [2024-07-25 13:49:09.740644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.672 [2024-07-25 13:49:09.740660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.672 [2024-07-25 13:49:09.740672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.672 [2024-07-25 13:49:09.740683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78984 len:8 PRP1 0x0 PRP2 0x0 00:20:27.672 [2024-07-25 13:49:09.740702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.672 [2024-07-25 13:49:09.740716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.672 [2024-07-25 13:49:09.740727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.672 [2024-07-25 13:49:09.740738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78992 len:8 PRP1 0x0 PRP2 0x0 00:20:27.672 [2024-07-25 13:49:09.740751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.672 [2024-07-25 13:49:09.740764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.672 [2024-07-25 13:49:09.740775] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.672 [2024-07-25 13:49:09.740786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79000 len:8 PRP1 0x0 PRP2 0x0 00:20:27.672 [2024-07-25 13:49:09.740799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.672 [2024-07-25 13:49:09.740812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.672 [2024-07-25 13:49:09.740823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.672 [2024-07-25 13:49:09.740835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79008 len:8 PRP1 0x0 PRP2 0x0 00:20:27.672 [2024-07-25 13:49:09.740848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.672 [2024-07-25 13:49:09.740861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.672 [2024-07-25 13:49:09.753332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.672 [2024-07-25 13:49:09.753375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79016 len:8 PRP1 0x0 PRP2 0x0 00:20:27.672 [2024-07-25 13:49:09.753399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.672 [2024-07-25 13:49:09.753415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.672 [2024-07-25 
13:49:09.753428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.673 [2024-07-25 13:49:09.753440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79024 len:8 PRP1 0x0 PRP2 0x0 00:20:27.673 [2024-07-25 13:49:09.753453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.673 [2024-07-25 13:49:09.753466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.673 [2024-07-25 13:49:09.753478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.673 [2024-07-25 13:49:09.753489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79032 len:8 PRP1 0x0 PRP2 0x0 00:20:27.673 [2024-07-25 13:49:09.753502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.673 [2024-07-25 13:49:09.753515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.673 [2024-07-25 13:49:09.753526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.673 [2024-07-25 13:49:09.753537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79040 len:8 PRP1 0x0 PRP2 0x0 00:20:27.673 [2024-07-25 13:49:09.753556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.673 [2024-07-25 13:49:09.753571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.673 [2024-07-25 13:49:09.753582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.673 [2024-07-25 13:49:09.753593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79048 len:8 PRP1 0x0 PRP2 0x0 00:20:27.673 [2024-07-25 13:49:09.753607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.673 [2024-07-25 13:49:09.753621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.673 [2024-07-25 13:49:09.753632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.673 [2024-07-25 13:49:09.753643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79056 len:8 PRP1 0x0 PRP2 0x0 00:20:27.673 [2024-07-25 13:49:09.753657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.673 [2024-07-25 13:49:09.753669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.673 [2024-07-25 13:49:09.753680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.673 [2024-07-25 13:49:09.753692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79064 len:8 PRP1 0x0 PRP2 0x0 00:20:27.673 [2024-07-25 13:49:09.753705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.673 [2024-07-25 13:49:09.753718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.673 [2024-07-25 13:49:09.753729] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.673 [2024-07-25 13:49:09.753740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79072 len:8 PRP1 0x0 PRP2 0x0 00:20:27.673 [2024-07-25 13:49:09.753754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.673 [2024-07-25 13:49:09.753768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.673 [2024-07-25 13:49:09.753779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.673 [2024-07-25 13:49:09.753790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79080 len:8 PRP1 0x0 PRP2 0x0 00:20:27.673 [2024-07-25 13:49:09.753803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.673 [2024-07-25 13:49:09.753817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.673 [2024-07-25 13:49:09.753827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.673 [2024-07-25 13:49:09.753839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79088 len:8 PRP1 0x0 PRP2 0x0 00:20:27.673 [2024-07-25 13:49:09.753852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.673 [2024-07-25 13:49:09.753865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.673 [2024-07-25 13:49:09.753877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.673 [2024-07-25 13:49:09.753888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79096 len:8 PRP1 0x0 PRP2 0x0 00:20:27.673 [2024-07-25 13:49:09.753901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.673 [2024-07-25 13:49:09.753914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.673 [2024-07-25 13:49:09.753925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.673 [2024-07-25 13:49:09.753940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79104 len:8 PRP1 0x0 PRP2 0x0 00:20:27.673 [2024-07-25 13:49:09.753953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.673 [2024-07-25 13:49:09.753967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.673 [2024-07-25 13:49:09.753978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.673 [2024-07-25 13:49:09.753989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79112 len:8 PRP1 0x0 PRP2 0x0 00:20:27.673 [2024-07-25 13:49:09.754002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.673 [2024-07-25 13:49:09.754015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.673 [2024-07-25 13:49:09.754026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:20:27.673 [2024-07-25 13:49:09.754051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79120 len:8 PRP1 0x0 PRP2 0x0 00:20:27.673 [2024-07-25 13:49:09.754076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.673 [2024-07-25 13:49:09.754092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.673 [2024-07-25 13:49:09.754119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.673 [2024-07-25 13:49:09.754131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79128 len:8 PRP1 0x0 PRP2 0x0 00:20:27.673 [2024-07-25 13:49:09.754145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.673 [2024-07-25 13:49:09.754159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.673 [2024-07-25 13:49:09.754171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.673 [2024-07-25 13:49:09.754183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79136 len:8 PRP1 0x0 PRP2 0x0 00:20:27.673 [2024-07-25 13:49:09.754197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.673 [2024-07-25 13:49:09.754211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.673 [2024-07-25 13:49:09.754222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.673 [2024-07-25 13:49:09.754235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79144 len:8 PRP1 0x0 PRP2 0x0 00:20:27.673 [2024-07-25 13:49:09.754248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.673 [2024-07-25 13:49:09.754262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.673 [2024-07-25 13:49:09.754274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.673 [2024-07-25 13:49:09.754285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79152 len:8 PRP1 0x0 PRP2 0x0 00:20:27.673 [2024-07-25 13:49:09.754299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.673 [2024-07-25 13:49:09.754313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.673 [2024-07-25 13:49:09.754325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.673 [2024-07-25 13:49:09.754364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79160 len:8 PRP1 0x0 PRP2 0x0 00:20:27.673 [2024-07-25 13:49:09.754378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.673 [2024-07-25 13:49:09.754392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.673 [2024-07-25 13:49:09.754422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.673 [2024-07-25 
13:49:09.754434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79168 len:8 PRP1 0x0 PRP2 0x0 00:20:27.673 [2024-07-25 13:49:09.754447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.673 [2024-07-25 13:49:09.754461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.673 [2024-07-25 13:49:09.754472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.673 [2024-07-25 13:49:09.754483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78544 len:8 PRP1 0x0 PRP2 0x0 00:20:27.673 [2024-07-25 13:49:09.754496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.673 [2024-07-25 13:49:09.754509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.673 [2024-07-25 13:49:09.754520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.673 [2024-07-25 13:49:09.754531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78552 len:8 PRP1 0x0 PRP2 0x0 00:20:27.673 [2024-07-25 13:49:09.754544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.673 [2024-07-25 13:49:09.754558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.673 [2024-07-25 13:49:09.754569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.673 [2024-07-25 13:49:09.754580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78560 len:8 PRP1 0x0 PRP2 0x0 00:20:27.674 [2024-07-25 13:49:09.754593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.674 [2024-07-25 13:49:09.754606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.674 [2024-07-25 13:49:09.754617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.674 [2024-07-25 13:49:09.754628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78568 len:8 PRP1 0x0 PRP2 0x0 00:20:27.674 [2024-07-25 13:49:09.754641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.674 [2024-07-25 13:49:09.754655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.674 [2024-07-25 13:49:09.754666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.674 [2024-07-25 13:49:09.754679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78576 len:8 PRP1 0x0 PRP2 0x0 00:20:27.674 [2024-07-25 13:49:09.754692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.674 [2024-07-25 13:49:09.754706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.674 [2024-07-25 13:49:09.754717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.674 [2024-07-25 13:49:09.754728] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78584 len:8 PRP1 0x0 PRP2 0x0 00:20:27.674 [2024-07-25 13:49:09.754741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.674 [2024-07-25 13:49:09.754754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.674 [2024-07-25 13:49:09.754765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.674 [2024-07-25 13:49:09.754776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78592 len:8 PRP1 0x0 PRP2 0x0 00:20:27.674 [2024-07-25 13:49:09.754789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.674 [2024-07-25 13:49:09.754806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.674 [2024-07-25 13:49:09.754818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.674 [2024-07-25 13:49:09.754829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79176 len:8 PRP1 0x0 PRP2 0x0 00:20:27.674 [2024-07-25 13:49:09.754841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.674 [2024-07-25 13:49:09.754855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.674 [2024-07-25 13:49:09.754866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.674 [2024-07-25 13:49:09.754878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79184 len:8 PRP1 0x0 PRP2 0x0 00:20:27.674 [2024-07-25 13:49:09.754890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.674 [2024-07-25 13:49:09.754903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.674 [2024-07-25 13:49:09.754915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.674 [2024-07-25 13:49:09.754926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79192 len:8 PRP1 0x0 PRP2 0x0 00:20:27.674 [2024-07-25 13:49:09.754939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.674 [2024-07-25 13:49:09.754952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.674 [2024-07-25 13:49:09.754964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.674 [2024-07-25 13:49:09.754975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79200 len:8 PRP1 0x0 PRP2 0x0 00:20:27.674 [2024-07-25 13:49:09.754989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.674 [2024-07-25 13:49:09.755002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.674 [2024-07-25 13:49:09.755013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.674 [2024-07-25 13:49:09.755025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:79208 len:8 PRP1 0x0 PRP2 0x0 00:20:27.674 [2024-07-25 13:49:09.755053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.674 [2024-07-25 13:49:09.755077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.674 [2024-07-25 13:49:09.755096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.674 [2024-07-25 13:49:09.755123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79216 len:8 PRP1 0x0 PRP2 0x0 00:20:27.674 [2024-07-25 13:49:09.755137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.674 [2024-07-25 13:49:09.755151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.674 [2024-07-25 13:49:09.755163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.674 [2024-07-25 13:49:09.755175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79224 len:8 PRP1 0x0 PRP2 0x0 00:20:27.674 [2024-07-25 13:49:09.755188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.674 [2024-07-25 13:49:09.755202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.674 [2024-07-25 13:49:09.755214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.674 [2024-07-25 13:49:09.755226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79232 len:8 PRP1 0x0 PRP2 0x0 00:20:27.674 [2024-07-25 13:49:09.755245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.674 [2024-07-25 13:49:09.755260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.674 [2024-07-25 13:49:09.755271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.674 [2024-07-25 13:49:09.755283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79240 len:8 PRP1 0x0 PRP2 0x0 00:20:27.674 [2024-07-25 13:49:09.755297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.674 [2024-07-25 13:49:09.755311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.674 [2024-07-25 13:49:09.755323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.674 [2024-07-25 13:49:09.755349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79248 len:8 PRP1 0x0 PRP2 0x0 00:20:27.674 [2024-07-25 13:49:09.755363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.674 [2024-07-25 13:49:09.755378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.674 [2024-07-25 13:49:09.755389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.674 [2024-07-25 13:49:09.755416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79256 len:8 PRP1 0x0 PRP2 0x0 
00:20:27.674 [2024-07-25 13:49:09.755429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:27.674 [2024-07-25 13:49:09.755443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:27.674 [2024-07-25 13:49:09.755454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:27.674 [2024-07-25 13:49:09.755465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79264 len:8 PRP1 0x0 PRP2 0x0
00:20:27.674 [2024-07-25 13:49:09.755477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:27.674 [2024-07-25 13:49:09.755491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:27.674 [2024-07-25 13:49:09.755502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:27.674 [2024-07-25 13:49:09.755513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79272 len:8 PRP1 0x0 PRP2 0x0
00:20:27.674 [2024-07-25 13:49:09.755526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:27.674 [2024-07-25 13:49:09.755539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:27.674 [2024-07-25 13:49:09.755550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:27.674 [2024-07-25 13:49:09.755562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79280 len:8 PRP1 0x0 PRP2 0x0
00:20:27.674 [2024-07-25 13:49:09.755575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:27.674 [2024-07-25 13:49:09.755638] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2119c10 was disconnected and freed. reset controller.
00:20:27.674 [2024-07-25 13:49:09.755657] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:20:27.674 [2024-07-25 13:49:09.755683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:27.674 [2024-07-25 13:49:09.755745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fc0f0 (9): Bad file descriptor
00:20:27.674 [2024-07-25 13:49:09.759027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:27.674 [2024-07-25 13:49:09.836962] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:27.674 [2024-07-25 13:49:13.491075] nvme_qpair.c: *NOTICE*: [condensed: in-flight READ commands sqid:1 nsid:1 lba:101624 through lba:101680 len:8 (SGL TRANSPORT DATA BLOCK) and WRITE commands sqid:1 nsid:1 lba:101752 through lba:102496 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:20:27.677 [2024-07-25 13:49:13.494473] nvme_qpair.c: *NOTICE*: [condensed: queued WRITE commands sqid:1 cid:0 nsid:1 lba:102504 through lba:102640 and READ commands sqid:1 cid:0 nsid:1 lba:101688 through lba:101744 len:8 PRP1 0x0 PRP2 0x0 completed manually, each ABORTED - SQ DELETION (00/08) qid:1]
00:20:27.679 [2024-07-25 13:49:13.495906] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x212ad40 was disconnected and freed. reset controller.
00:20:27.679 [2024-07-25 13:49:13.495925] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:20:27.679 [2024-07-25 13:49:13.495974] nvme_qpair.c: *NOTICE*: [condensed: ASYNC EVENT REQUEST (0c) qid:0 cid:3, cid:2, cid:1, cid:0 nsid:0 cdw10:00000000 cdw11:00000000, each ABORTED - SQ DELETION (00/08) qid:0]
00:20:27.679 [2024-07-25 13:49:13.496113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:27.679 [2024-07-25 13:49:13.499390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:27.679 [2024-07-25 13:49:13.499431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fc0f0 (9): Bad file descriptor
00:20:27.679 [2024-07-25 13:49:13.569412] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
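The abort storms above are easier to reason about in aggregate. Below is a minimal sketch (assuming Python 3; the script and all of its names are illustrative helpers, not part of SPDK or this CI job) that counts how many "ABORTED - SQ DELETION" completions precede each bdev_nvme failover when a log like this one is fed on stdin; the regular expressions only match the exact message text shown in the entries above.

#!/usr/bin/env python3
# Hypothetical triage helper: tally SQ-DELETION aborts per failover event.
import re
import sys

# Message fragments copied from the SPDK log output above.
ABORT = re.compile(r"ABORTED - SQ DELETION")
FAILOVER = re.compile(r"Start failover from (\S+) to (\S+)")

def summarize(lines):
    """Return [(src_trid, dst_trid, aborts_seen_before_this_failover), ...]."""
    events = []
    aborted = 0
    for line in lines:
        # A physical log line may carry many completions; count each match.
        aborted += len(ABORT.findall(line))
        m = FAILOVER.search(line)
        if m:
            events.append((m.group(1), m.group(2), aborted))
            aborted = 0  # start counting toward the next failover
    return events

if __name__ == "__main__":
    for src, dst, n in summarize(sys.stdin):
        print(f"failover {src} -> {dst}: {n} commands aborted before it")

Run against this console log it would report the 10.0.0.2:4420 -> 4421 and 4421 -> 4422 transitions seen here, each preceded by the block of aborted queued I/O that bdev_nvme completes manually before resetting the controller.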
00:20:27.679 [2024-07-25 13:49:18.006840] nvme_qpair.c: *NOTICE*: [condensed: in-flight READ commands sqid:1 nsid:1 lba:33240 through lba:33472 len:8 (SGL TRANSPORT DATA BLOCK), each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
READ sqid:1 cid:93 nsid:1 lba:33480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.007839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.007855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.007868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.007882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:33496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.007896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.007911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:33504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.007925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.007941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.007969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.007986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:33520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.008000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.008015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:33528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.008029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.008045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:33536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.008090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.008114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:33544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.008131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.008147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:33552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.008164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.008180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33560 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.008195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.008211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.008226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.008242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:33576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.008257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.008273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:33584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.008287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.008303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.008317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.008333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.008347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.008363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:33608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.008402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.008418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.008432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.008447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.008477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.008494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:33632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.008508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.008524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:27.680 [2024-07-25 13:49:18.008538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.008557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:33648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.008572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.008587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:33656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.008602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.008617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:33664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.008631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.008646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:33672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.008661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.008676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.680 [2024-07-25 13:49:18.008690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.680 [2024-07-25 13:49:18.008705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:33696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.008720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.008735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:33704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.008750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.008765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:33712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.008779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.008794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:33720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.008808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.008824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:33728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.008838] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.008853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:33736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.008868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.008883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:33744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.008897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.008912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.008930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.008946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:33760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.008966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.008983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:33768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.008997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:33776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:33784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:33792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:33800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009171] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:33824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:33832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:33840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:33848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:33864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:33888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:33896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:33904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:33912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:33928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:33936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:33952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 
[2024-07-25 13:49:18.009821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:33976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.681 [2024-07-25 13:49:18.009850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.681 [2024-07-25 13:49:18.009864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.009879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:33992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.009902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.009917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:34000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.009931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.009946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:34008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.009967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.009981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:34024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:34040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:34048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:34056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:34088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:34120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:69 nsid:1 lba:34136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:34152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:34168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:27.682 [2024-07-25 13:49:18.010784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.682 
[2024-07-25 13:49:18.010843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34216 len:8 PRP1 0x0 PRP2 0x0 00:20:27.682 [2024-07-25 13:49:18.010857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.682 [2024-07-25 13:49:18.010887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.682 [2024-07-25 13:49:18.010898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34224 len:8 PRP1 0x0 PRP2 0x0 00:20:27.682 [2024-07-25 13:49:18.010911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.682 [2024-07-25 13:49:18.010936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.682 [2024-07-25 13:49:18.010947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34232 len:8 PRP1 0x0 PRP2 0x0 00:20:27.682 [2024-07-25 13:49:18.010960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.010974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.682 [2024-07-25 13:49:18.010985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.682 [2024-07-25 13:49:18.010996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34240 len:8 PRP1 0x0 PRP2 0x0 00:20:27.682 [2024-07-25 13:49:18.011014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.682 [2024-07-25 13:49:18.011028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.682 [2024-07-25 13:49:18.011039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.682 [2024-07-25 13:49:18.011090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34248 len:8 PRP1 0x0 PRP2 0x0 00:20:27.683 [2024-07-25 13:49:18.011106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.683 [2024-07-25 13:49:18.011122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.683 [2024-07-25 13:49:18.011134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.683 [2024-07-25 13:49:18.011146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34256 len:8 PRP1 0x0 PRP2 0x0 00:20:27.683 [2024-07-25 13:49:18.011160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.683 [2024-07-25 13:49:18.011174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:27.683 [2024-07-25 13:49:18.011186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:27.683 [2024-07-25 13:49:18.011197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33688 len:8 PRP1 0x0 PRP2 0x0 00:20:27.683 [2024-07-25 13:49:18.011211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.683 [2024-07-25 13:49:18.011270] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x212cb40 was disconnected and freed. reset controller. 00:20:27.683 [2024-07-25 13:49:18.011290] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:20:27.683 [2024-07-25 13:49:18.011325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.683 [2024-07-25 13:49:18.011343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.683 [2024-07-25 13:49:18.011370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.683 [2024-07-25 13:49:18.011384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.683 [2024-07-25 13:49:18.011400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.683 [2024-07-25 13:49:18.011413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.683 [2024-07-25 13:49:18.011428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.683 [2024-07-25 13:49:18.011452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.683 [2024-07-25 13:49:18.011466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:27.683 [2024-07-25 13:49:18.011520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fc0f0 (9): Bad file descriptor 00:20:27.683 [2024-07-25 13:49:18.014826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:27.683 [2024-07-25 13:49:18.048766] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:27.683 
00:20:27.683 Latency(us)
00:20:27.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:27.683 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:27.683 Verification LBA range: start 0x0 length 0x4000
00:20:27.683 NVMe0n1 : 15.00 8648.35 33.78 468.98 0.00 14010.55 552.20 47185.92
00:20:27.683 ===================================================================================================================
00:20:27.683 Total : 8648.35 33.78 468.98 0.00 14010.55 552.20 47185.92
00:20:27.683 Received shutdown signal, test time was about 15.000000 seconds
00:20:27.683 
00:20:27.683 Latency(us)
00:20:27.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:27.683 ===================================================================================================================
00:20:27.683 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:27.683 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:20:27.683 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:20:27.683 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:20:27.683 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=625467
00:20:27.683 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:20:27.683 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 625467 /var/tmp/bdevperf.sock
00:20:27.683 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 625467 ']'
00:20:27.683 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:27.683 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:20:27.683 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:27.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:27.683 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:20:27.683 13:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:20:27.683 13:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:20:27.683 13:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:20:27.683 13:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:20:27.683 [2024-07-25 13:49:24.497492] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:20:27.683 13:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:20:27.940 [2024-07-25 13:49:24.762194] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:20:27.941 13:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:28.198 NVMe0n1
00:20:28.198 13:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:28.455 
00:20:28.455 13:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:29.018 
00:20:29.018 13:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:29.018 13:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:20:29.274 13:49:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:29.530 13:49:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:20:32.802 13:49:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:32.802 13:49:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:20:32.802 13:49:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=626129
00:20:32.802 13:49:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 626129
00:20:32.802 13:49:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:33.738 0
00:20:33.738 13:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:20:33.738 [2024-07-25 13:49:23.988055] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:20:33.738 [2024-07-25 13:49:23.988154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid625467 ]
00:20:33.738 EAL: No free 2048 kB hugepages reported on node 1
00:20:33.738 [2024-07-25 13:49:24.046956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:33.738 [2024-07-25 13:49:24.151422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:20:33.738 [2024-07-25 13:49:26.324151] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:20:33.738 [2024-07-25 13:49:26.324218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:33.738 [2024-07-25 13:49:26.324242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:33.738 [2024-07-25 13:49:26.324259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:33.738 [2024-07-25 13:49:26.324274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:33.738 [2024-07-25 13:49:26.324290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:33.738 [2024-07-25 13:49:26.324304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:33.738 [2024-07-25 13:49:26.324319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:33.738 [2024-07-25 13:49:26.324334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:33.738 [2024-07-25 13:49:26.324360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:33.738 [2024-07-25 13:49:26.324404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:33.738 [2024-07-25 13:49:26.324437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc40f0 (9): Bad file descriptor
00:20:33.738 [2024-07-25 13:49:26.385259] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:33.738 Running I/O for 1 seconds...
00:20:33.738 
00:20:33.738 Latency(us)
00:20:33.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:33.738 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:33.738 Verification LBA range: start 0x0 length 0x4000
00:20:33.738 NVMe0n1 : 1.01 8822.51 34.46 0.00 0.00 14442.15 867.75 14757.74
00:20:33.738 ===================================================================================================================
00:20:33.738 Total : 8822.51 34.46 0.00 0.00 14442.15 867.75 14757.74
00:20:33.738 13:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:34.331 13:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:20:34.331 13:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:34.331 13:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:34.331 13:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:20:34.589 13:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:20:34.845 13:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:20:38.120 13:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:38.120 13:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:20:38.120 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 625467
00:20:38.120 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 625467 ']'
00:20:38.120 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 625467
00:20:38.120 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:20:38.120 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:38.120 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 625467
00:20:38.120 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:20:38.120 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:20:38.120 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 625467'
00:20:38.120 killing process with pid 625467
00:20:38.120 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 625467
00:20:38.120 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 625467
00:20:38.380 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:20:38.380 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:38.946 rmmod nvme_tcp
00:20:38.946 rmmod nvme_fabrics
00:20:38.946 rmmod nvme_keyring
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 623196 ']'
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 623196
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 623196 ']'
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 623196
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 623196
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 623196'
00:20:38.946 killing process with pid 623196
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 623196
00:20:38.946 13:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 623196
00:20:39.207 13:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:20:39.207 13:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:20:39.207 13:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:20:39.207 13:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:39.207 13:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:39.207 13:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:39.207 13:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:39.207 13:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:41.110 13:49:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:41.110 
00:20:41.110 real 0m35.046s
00:20:41.110 user 2m2.685s
00:20:41.110 sys 0m6.153s
00:20:41.110 13:49:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable
00:20:41.110 13:49:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:20:41.110 ************************************
00:20:41.110 END TEST nvmf_failover
00:20:41.110 ************************************
00:20:41.110 13:49:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:20:41.110 13:49:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:20:41.110 13:49:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:20:41.110 13:49:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:20:41.111 ************************************
00:20:41.111 START TEST nvmf_host_discovery
00:20:41.111 ************************************
00:20:41.111 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:20:41.369 * Looking for test storage...
00:20:41.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:41.369 13:49:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:20:41.369 13:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:43.902 13:49:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:43.902 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:43.902 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:43.902 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:43.902 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:43.902 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:43.903 13:49:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:43.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:43.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:20:43.903 00:20:43.903 --- 10.0.0.2 ping statistics --- 00:20:43.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.903 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:43.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:43.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:20:43.903 00:20:43.903 --- 10.0.0.1 ping statistics --- 00:20:43.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.903 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=628746 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 628746 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 628746 ']' 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 
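The trace above is the test-net plumbing from nvmf/common.sh condensed to a handful of commands: with two back-to-back ice ports (cvl_0_0 and cvl_0_1), the target port is moved into a private network namespace and addressed as 10.0.0.2, the initiator keeps 10.0.0.1 in the root namespace, and a one-packet ping in each direction proves the link before any NVMe/TCP traffic is attempted. A sketch of the same setup, assuming the interface names from this run and root privileges:

# Namespace split as traced from nvmf/common.sh@229-268 above.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                        # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator port
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                 # target -> initiator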
00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.903 [2024-07-25 13:49:40.553462] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:43.903 [2024-07-25 13:49:40.553552] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.903 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.903 [2024-07-25 13:49:40.620950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.903 [2024-07-25 13:49:40.732941] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.903 [2024-07-25 13:49:40.732996] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.903 [2024-07-25 13:49:40.733023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.903 [2024-07-25 13:49:40.733034] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.903 [2024-07-25 13:49:40.733043] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:43.903 [2024-07-25 13:49:40.733089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.903 [2024-07-25 13:49:40.859863] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:20:43.903 [2024-07-25 13:49:40.868075] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.903 null0 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.903 null1 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=628881 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 628881 /tmp/host.sock 00:20:43.903 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 628881 ']' 00:20:43.904 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:20:43.904 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:43.904 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:43.904 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:43.904 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:43.904 13:49:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.163 [2024-07-25 13:49:40.939441] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
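Two SPDK processes are in play from here on: the nvmf target launched above inside the namespace (RPC on the default /var/tmp/spdk.sock) and a second nvmf_tgt acting as the host, started with -r /tmp/host.sock so the two RPC servers cannot collide. A rough sketch of how the host side is brought up and pointed at the discovery service, using the paths and arguments from this log; the polling loop is a simplified stand-in for the harness's waitforlisten helper:

# Host-side app plus discovery start, per host/discovery.sh@44-51 in this run.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
HOST_SOCK=/tmp/host.sock

"$SPDK/build/bin/nvmf_tgt" -m 0x1 -r "$HOST_SOCK" &
hostpid=$!

# Wait until the RPC server answers (simplified waitforlisten).
until "$SPDK/scripts/rpc.py" -s "$HOST_SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done

"$SPDK/scripts/rpc.py" -s "$HOST_SOCK" log_set_flag bdev_nvme
"$SPDK/scripts/rpc.py" -s "$HOST_SOCK" bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test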
00:20:44.163 [2024-07-25 13:49:40.939523] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid628881 ] 00:20:44.163 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.163 [2024-07-25 13:49:40.995822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.163 [2024-07-25 13:49:41.100415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.420 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:44.420 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:20:44.420 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:44.420 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:44.420 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.420 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.420 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.420 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:20:44.420 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.420 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.420 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.420 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:20:44.420 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:20:44.420 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:44.420 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.421 13:49:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:20:44.421 13:49:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:44.421 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.679 [2024-07-25 13:49:41.505791] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:20:44.679 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:44.680 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:44.680 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.680 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.680 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.680 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:44.680 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:44.680 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:44.680 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:44.680 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:44.680 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:20:44.680 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:44.680 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.680 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:44.680 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.680 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:44.680 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:44.680 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.680 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:20:44.680 13:49:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:20:45.245 [2024-07-25 13:49:42.237105] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:45.245 [2024-07-25 13:49:42.237142] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:45.245 [2024-07-25 13:49:42.237166] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:45.503 
[2024-07-25 13:49:42.323463] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:20:45.503 [2024-07-25 13:49:42.421106] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:45.503 [2024-07-25 13:49:42.421130] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:45.761 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:45.761 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:45.761 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:20:45.761 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:45.761 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:45.761 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.761 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:45.761 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:45.761 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:45.761 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.761 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.761 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:45.761 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:45.761 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
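Every check in this test runs through the harness's waitforcondition helper, whose internals are what the common/autotest_common.sh@914-918 traces above show: the condition string is re-evaluated up to a bounded number of times instead of being asserted once, which is what lets the asynchronous discovery entries settle. A minimal re-creation of that pattern; the 10-try bound matches the traced max=10, while the 1-second pause between tries is assumed:

# Polling helper in the style of autotest_common.sh's waitforcondition.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0
        sleep 1     # assumed pause; the trace only shows the retry counter
    done
    return 1
}

# Example mirroring the get_bdev_list check above: block until the host
# process reports the discovered namespace as bdev nvme0n1.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
waitforcondition '[[ "$("$SPDK"/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r ".[].name" | sort | xargs)" == "nvme0n1" ]]'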
00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:45.762 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:46.020 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:46.021 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:20:46.021 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:46.021 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:46.021 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.021 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:46.021 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:46.021 13:49:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:46.021 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.021 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:46.021 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:46.021 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:20:46.021 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:20:46.021 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:46.021 13:49:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:46.021 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:46.021 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:46.021 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:46.021 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:20:46.021 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:20:46.021 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:46.021 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.021 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:46.279 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.279 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:46.280 [2024-07-25 13:49:43.098577] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:46.280 [2024-07-25 13:49:43.098889] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:46.280 [2024-07-25 13:49:43.098925] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.280 [2024-07-25 13:49:43.187638] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:46.280 13:49:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:20:46.280 13:49:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:20:46.539 [2024-07-25 13:49:43.498189] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:46.539 [2024-07-25 13:49:43.498211] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:46.539 [2024-07-25 13:49:43.498220] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:47.473 13:49:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.473 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.473 [2024-07-25 13:49:44.338800] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:47.473 [2024-07-25 13:49:44.338846] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:47.473 [2024-07-25 13:49:44.339993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.473 [2024-07-25 13:49:44.340029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.473 [2024-07-25 13:49:44.340047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.473 [2024-07-25 13:49:44.340068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.473 [2024-07-25 13:49:44.340084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.473 [2024-07-25 13:49:44.340109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.473 [2024-07-25 13:49:44.340129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.473 [2024-07-25 13:49:44.340143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.474 [2024-07-25 13:49:44.340157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc08c20 is same with the state(5) to be set 00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:47.474 [2024-07-25 13:49:44.349994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc08c20 (9): Bad file descriptor 00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.474 [2024-07-25 13:49:44.360038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:47.474 [2024-07-25 13:49:44.360270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.474 [2024-07-25 13:49:44.360301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc08c20 with addr=10.0.0.2, port=4420 00:20:47.474 [2024-07-25 13:49:44.360318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc08c20 is same with the state(5) to be set 00:20:47.474 [2024-07-25 13:49:44.360342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc08c20 (9): Bad file descriptor 00:20:47.474 [2024-07-25 13:49:44.360364] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:47.474 [2024-07-25 13:49:44.360378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:47.474 [2024-07-25 13:49:44.360395] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:47.474 [2024-07-25 13:49:44.360415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:47.474 [2024-07-25 13:49:44.370150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:47.474 [2024-07-25 13:49:44.370341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.474 [2024-07-25 13:49:44.370369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc08c20 with addr=10.0.0.2, port=4420 00:20:47.474 [2024-07-25 13:49:44.370385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc08c20 is same with the state(5) to be set 00:20:47.474 [2024-07-25 13:49:44.370407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc08c20 (9): Bad file descriptor 00:20:47.474 [2024-07-25 13:49:44.370446] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:47.474 [2024-07-25 13:49:44.370463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:47.474 [2024-07-25 13:49:44.370477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:47.474 [2024-07-25 13:49:44.370496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:47.474 [2024-07-25 13:49:44.380222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:47.474 [2024-07-25 13:49:44.380362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.474 [2024-07-25 13:49:44.380390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc08c20 with addr=10.0.0.2, port=4420 00:20:47.474 [2024-07-25 13:49:44.380406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc08c20 is same with the state(5) to be set 00:20:47.474 [2024-07-25 13:49:44.380429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc08c20 (9): Bad file descriptor 00:20:47.474 [2024-07-25 13:49:44.380449] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:47.474 [2024-07-25 13:49:44.380463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:47.474 [2024-07-25 13:49:44.380476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:47.474 [2024-07-25 13:49:44.380495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
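The polling pattern traced over and over above (autotest_common.sh@914 through @920: local cond, local max=10, (( max-- )), eval, return 0, sleep 1) is easier to read in one place. Below is a minimal sketch of what those xtrace lines imply, reconstructed purely from the trace; the real helper in autotest_common.sh may differ in detail:

waitforcondition() {
	# cond is a shell expression, e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
	local cond=$1
	local max=10
	while ((max--)); do
		# eval re-expands the command substitutions inside $cond on every attempt,
		# which is why the trace shows the word-split form: eval '[[' '"$(...)"' == ...
		if eval $cond; then
			return 0
		fi
		sleep 1
	done
	return 1  # condition never became true within ~10s; callers treat this as failure
}
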
00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
[2024-07-25 13:49:44.390295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-07-25 13:49:44.390493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-25 13:49:44.390520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc08c20 with addr=10.0.0.2, port=4420
[2024-07-25 13:49:44.390536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc08c20 is same with the state(5) to be set
[2024-07-25 13:49:44.390558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc08c20 (9): Bad file descriptor
[2024-07-25 13:49:44.390591] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-07-25 13:49:44.390608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-07-25 13:49:44.390621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-07-25 13:49:44.390653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
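For reference, the two query helpers traced here (host/discovery.sh@55 and @63) reduce to plain rpc_cmd-plus-jq pipelines. This sketch is reconstructed from the xtrace above; the /tmp/host.sock path is the value as expanded in the trace, and the actual discovery.sh source may be parameterized differently:

# Space-separated, sorted list of bdev names on the host SPDK instance.
get_bdev_list() {
	rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# Space-separated, numerically sorted list of trsvcid (port) values for every
# path of one controller, e.g. "4420 4421" while both listeners are attached.
get_subsystem_paths() {
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
		jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

The trailing xargs only collapses jq's one-value-per-line output onto a single line, which is what lets the waitforcondition checks compare against literals such as "nvme0n1 nvme0n2" or "$NVMF_PORT $NVMF_SECOND_PORT".
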
00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
[2024-07-25 13:49:44.400371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-07-25 13:49:44.400548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-25 13:49:44.400576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc08c20 with addr=10.0.0.2, port=4420
[2024-07-25 13:49:44.400593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc08c20 is same with the state(5) to be set
[2024-07-25 13:49:44.400627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc08c20 (9): Bad file descriptor
[2024-07-25 13:49:44.400663] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-07-25 13:49:44.400681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-07-25 13:49:44.400694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-07-25 13:49:44.400714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-07-25 13:49:44.410444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
[2024-07-25 13:49:44.410684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-25 13:49:44.410711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc08c20 with addr=10.0.0.2, port=4420
[2024-07-25 13:49:44.410727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc08c20 is same with the state(5) to be set
[2024-07-25 13:49:44.410761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc08c20 (9): Bad file descriptor
[2024-07-25 13:49:44.410807] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-07-25 13:49:44.410826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-07-25 13:49:44.410840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-07-25 13:49:44.410858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
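Similarly, the notification bookkeeping traced at host/discovery.sh@74-@80 (notification_count=..., notify_id=...) amounts to a cursor that advances past events already counted. A sketch inferred from the trace, which shows notify_id stepping 0 -> 1 -> 2 -> 4 as bdev add/remove events accumulate; the real helpers may differ:

get_notification_count() {
	# Count events newer than the cursor, then advance the cursor past them.
	notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i $notify_id | jq '. | length')
	notify_id=$((notify_id + notification_count))
}

is_notification_count_eq() {
	local expected_count=$1
	waitforcondition 'get_notification_count && ((notification_count == expected_count))'
}

Wrapping the count check in waitforcondition is what makes these assertions tolerant of the asynchronous AER/discovery-log traffic visible in the surrounding log lines.
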
00:20:47.474 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.474 [2024-07-25 13:49:44.420510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:47.474 [2024-07-25 13:49:44.420647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.475 [2024-07-25 13:49:44.420690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc08c20 with addr=10.0.0.2, port=4420 00:20:47.475 [2024-07-25 13:49:44.420706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc08c20 is same with the state(5) to be set 00:20:47.475 [2024-07-25 13:49:44.420740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc08c20 (9): Bad file descriptor 00:20:47.475 [2024-07-25 13:49:44.420763] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:47.475 [2024-07-25 13:49:44.420777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:47.475 [2024-07-25 13:49:44.420791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:47.475 [2024-07-25 13:49:44.420810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:47.475 [2024-07-25 13:49:44.425605] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:20:47.475 [2024-07-25 13:49:44.425632] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:47.475 13:49:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.475 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( 
max-- )) 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:20:47.733 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && 
((notification_count == expected_count))' 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.734 13:49:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.106 [2024-07-25 13:49:45.726723] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:49.106 [2024-07-25 13:49:45.726748] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:49.106 [2024-07-25 13:49:45.726771] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:49.106 [2024-07-25 13:49:45.853180] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:20:49.106 [2024-07-25 13:49:45.962487] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:49.106 [2024-07-25 13:49:45.962530] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:49.106 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.106 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:49.106 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:20:49.106 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:49.106 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:49.106 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:49.106 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:49.106 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:49.106 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:49.106 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.106 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.106 request: 00:20:49.106 { 00:20:49.106 "name": "nvme", 00:20:49.106 "trtype": "tcp", 00:20:49.106 "traddr": "10.0.0.2", 00:20:49.106 "adrfam": "ipv4", 00:20:49.106 "trsvcid": "8009", 00:20:49.106 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:49.106 "wait_for_attach": true, 00:20:49.106 "method": "bdev_nvme_start_discovery", 00:20:49.106 "req_id": 1 00:20:49.106 } 00:20:49.106 Got JSON-RPC error response 00:20:49.106 response: 00:20:49.106 { 00:20:49.106 "code": -17, 00:20:49.106 "message": "File exists" 00:20:49.106 } 00:20:49.106 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:49.106 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:20:49.106 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:49.106 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:49.106 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:49.107 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:20:49.107 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:49.107 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:49.107 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.107 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.107 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:49.107 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:49.107 13:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.107 request: 00:20:49.107 { 00:20:49.107 "name": "nvme_second", 00:20:49.107 "trtype": "tcp", 00:20:49.107 "traddr": "10.0.0.2", 00:20:49.107 "adrfam": "ipv4", 00:20:49.107 "trsvcid": "8009", 00:20:49.107 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:49.107 "wait_for_attach": true, 00:20:49.107 "method": "bdev_nvme_start_discovery", 00:20:49.107 "req_id": 1 00:20:49.107 } 00:20:49.107 Got JSON-RPC error response 00:20:49.107 response: 00:20:49.107 { 00:20:49.107 "code": -17, 00:20:49.107 "message": "File exists" 00:20:49.107 } 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:49.107 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:49.388 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.388 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:49.388 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:49.388 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:20:49.388 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:49.389 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:49.389 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:49.389 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:49.389 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:49.389 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:49.389 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.389 13:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.321 [2024-07-25 13:49:47.169988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:50.321 [2024-07-25 13:49:47.170050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0c030 with addr=10.0.0.2, port=8010 00:20:50.321 [2024-07-25 13:49:47.170092] 
nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:50.321 [2024-07-25 13:49:47.170109] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:50.321 [2024-07-25 13:49:47.170122] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:20:51.252 [2024-07-25 13:49:48.172251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:51.252 [2024-07-25 13:49:48.172285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0c030 with addr=10.0.0.2, port=8010 00:20:51.252 [2024-07-25 13:49:48.172306] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:51.252 [2024-07-25 13:49:48.172319] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:51.252 [2024-07-25 13:49:48.172332] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:20:52.188 [2024-07-25 13:49:49.174549] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:20:52.188 request: 00:20:52.188 { 00:20:52.188 "name": "nvme_second", 00:20:52.188 "trtype": "tcp", 00:20:52.188 "traddr": "10.0.0.2", 00:20:52.188 "adrfam": "ipv4", 00:20:52.188 "trsvcid": "8010", 00:20:52.188 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:52.188 "wait_for_attach": false, 00:20:52.188 "attach_timeout_ms": 3000, 00:20:52.188 "method": "bdev_nvme_start_discovery", 00:20:52.188 "req_id": 1 00:20:52.188 } 00:20:52.188 Got JSON-RPC error response 00:20:52.188 response: 00:20:52.188 { 00:20:52.188 "code": -110, 00:20:52.188 "message": "Connection timed out" 00:20:52.188 } 00:20:52.188 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:52.188 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:20:52.188 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:52.188 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:52.188 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:52.188 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:20:52.188 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:52.188 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:52.188 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.188 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.188 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:52.188 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:52.188 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.188 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:20:52.188 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:20:52.188 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 628881 00:20:52.188 13:49:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:20:52.188 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:52.188 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:20:52.188 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:52.188 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:20:52.188 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:52.188 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:52.446 rmmod nvme_tcp 00:20:52.446 rmmod nvme_fabrics 00:20:52.446 rmmod nvme_keyring 00:20:52.446 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:52.446 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:20:52.446 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:20:52.446 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 628746 ']' 00:20:52.446 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 628746 00:20:52.446 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 628746 ']' 00:20:52.446 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 628746 00:20:52.446 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:20:52.446 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:52.446 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 628746 00:20:52.446 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:52.446 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:52.446 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 628746' 00:20:52.446 killing process with pid 628746 00:20:52.446 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 628746 00:20:52.446 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 628746 00:20:52.706 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:52.706 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:52.706 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:52.706 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:52.706 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:52.706 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.706 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:52.706 13:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.612 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:54.612 00:20:54.612 real 0m13.446s 00:20:54.612 user 0m19.342s 00:20:54.612 sys 0m2.925s 00:20:54.612 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:54.612 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:54.612 ************************************ 00:20:54.612 END TEST nvmf_host_discovery 00:20:54.612 ************************************ 00:20:54.612 13:49:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:54.612 13:49:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:54.612 13:49:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:54.612 13:49:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.612 ************************************ 00:20:54.612 START TEST nvmf_host_multipath_status 00:20:54.612 ************************************ 00:20:54.612 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:54.872 * Looking for test storage... 00:20:54.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:54.872 
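For reference, the common.sh prologue traced above amounts to generating a host identity for the initiator. The following is a minimal illustrative sketch, not part of the captured output: the target address and subsystem NQN are assumed from later in this log, and no connect is actually issued at this point in the test.

  # Generate a host NQN the way common.sh does above; the host ID is the
  # uuid portion of that NQN. Both are then passed on a manual connect.
  NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}       # strip everything up to the last ':'
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"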
13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:20:54.872 13:49:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:56.774 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.774 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:56.775 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.775 13:49:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:56.775 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:56.775 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:56.775 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:57.033 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:57.033 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:57.033 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:57.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:57.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:20:57.033 00:20:57.033 --- 10.0.0.2 ping statistics --- 00:20:57.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.033 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:20:57.033 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:57.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:57.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:20:57.033 00:20:57.033 --- 10.0.0.1 ping statistics --- 00:20:57.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.033 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:20:57.033 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:57.034 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:20:57.034 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:57.034 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:57.034 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:57.034 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:57.034 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:57.034 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:57.034 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:57.034 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:20:57.034 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:57.034 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:57.034 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:57.034 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=631916 00:20:57.034 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:57.034 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 631916 00:20:57.034 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 631916 ']' 00:20:57.034 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.034 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:57.034 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.034 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:57.034 13:49:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:57.034 [2024-07-25 13:49:53.922546] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:57.034 [2024-07-25 13:49:53.922612] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.034 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.034 [2024-07-25 13:49:53.983726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:57.292 [2024-07-25 13:49:54.086471] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.292 [2024-07-25 13:49:54.086512] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.292 [2024-07-25 13:49:54.086540] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.292 [2024-07-25 13:49:54.086551] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.292 [2024-07-25 13:49:54.086560] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:57.292 [2024-07-25 13:49:54.086690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.292 [2024-07-25 13:49:54.086695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.292 13:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:57.292 13:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:20:57.292 13:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:57.292 13:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:57.292 13:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:57.292 13:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.292 13:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=631916 00:20:57.292 13:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:57.550 [2024-07-25 13:49:54.459532] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.550 13:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:57.808 Malloc0 00:20:57.808 13:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:58.066 13:49:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:58.324 13:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:58.582 [2024-07-25 13:49:55.470690] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.582 13:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:58.840 [2024-07-25 13:49:55.731379] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:58.840 13:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=632199 00:20:58.840 13:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:58.840 13:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 632199 /var/tmp/bdevperf.sock 00:20:58.840 13:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:58.840 13:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 632199 ']' 00:20:58.840 13:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:58.840 13:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:58.840 13:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:58.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
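The target-side setup traced above reduces to five RPCs, condensed here as a sketch: the rpc.py path is shortened and the xtrace prefixes dropped, and in the actual run these execute inside the cvl_0_0_ns_spdk namespace against the default /var/tmp/spdk.sock.

  # Transport, backing bdev, and an ANA-reporting (-r) subsystem with
  # two TCP listeners (4420 and 4421) exporting the same namespace.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421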
00:20:58.840 13:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:58.840 13:49:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:59.098 13:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:59.098 13:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:20:59.098 13:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:59.355 13:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:20:59.920 Nvme0n1 00:20:59.920 13:49:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:00.485 Nvme0n1 00:21:00.485 13:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:21:00.485 13:49:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:02.386 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:21:02.386 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:02.644 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:02.902 13:49:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:21:04.275 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:21:04.275 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:04.275 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:04.275 13:50:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:04.275 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:04.275 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:04.275 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:04.275 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:04.532 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:04.532 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:04.532 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:04.532 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:04.789 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:04.789 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:04.790 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:04.790 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:05.048 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:05.048 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:05.048 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:05.048 13:50:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:05.307 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:05.307 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:05.307 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:05.307 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:05.565 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:05.565 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:21:05.565 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:05.824 13:50:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:06.082 13:50:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:21:07.015 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:21:07.015 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:07.015 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:07.015 13:50:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:07.273 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:07.273 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:07.273 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:07.273 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:07.531 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:07.532 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:07.532 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:07.532 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:07.790 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:07.790 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:07.790 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:07.790 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:08.048 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:08.048 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:08.048 13:50:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:08.048 13:50:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:08.307 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:08.307 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:08.307 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:08.307 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:08.565 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:08.565 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:21:08.565 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:08.822 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:21:09.080 13:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:21:10.012 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:21:10.012 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:10.012 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.012 13:50:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:10.270 13:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:10.270 13:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:10.270 13:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.270 13:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:10.528 13:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:10.528 13:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:10.528 13:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.528 13:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:10.786 13:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:10.786 13:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:10.786 13:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.786 13:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:11.044 13:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:11.044 13:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:11.044 13:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:11.044 13:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:11.302 13:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:11.302 13:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:11.302 13:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:11.302 13:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:11.560 13:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:11.560 13:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:21:11.560 13:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:11.818 13:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:12.075 13:50:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:21:13.021 13:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:21:13.021 13:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:13.021 13:50:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.021 13:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:13.301 13:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:13.301 13:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:13.301 13:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.301 13:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:13.559 13:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:13.559 13:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:13.559 13:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.559 13:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:13.816 13:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:13.816 13:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:13.816 13:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.816 13:50:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:14.075 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:14.075 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:14.075 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:14.075 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:14.332 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:14.332 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:14.332 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:14.332 13:50:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:14.590 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:14.590 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:21:14.590 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:14.848 13:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:15.105 13:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:21:16.037 13:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:21:16.037 13:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:16.037 13:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:16.037 13:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:16.294 13:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:16.294 13:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:16.294 13:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:16.294 13:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:16.551 13:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:16.551 13:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:16.551 13:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:16.551 13:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:16.807 13:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:16.807 13:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:16.807 13:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:16.807 13:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:17.064 13:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:17.064 13:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:17.064 13:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.064 13:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:17.322 13:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:17.322 13:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:17.322 13:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.322 13:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:17.579 13:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:17.579 13:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:21:17.579 13:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:17.836 13:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:18.093 13:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:21:19.021 13:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:21:19.021 13:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:19.021 13:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:19.021 13:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:19.278 13:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:19.278 13:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:19.278 13:50:16 
00:21:17.579 13:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:21:17.579 13:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:21:17.836 13:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:21:18.093 13:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:21:19.021 13:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:21:19.021 13:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:21:19.021 13:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:19.021 13:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:21:19.278 13:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:21:19.278 13:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:21:19.278 13:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:19.278 13:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:21:19.534 13:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:19.534 13:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:21:19.534 13:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:19.534 13:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:21:19.791 13:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:19.791 13:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:21:19.791 13:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:19.791 13:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:21:20.048 13:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:20.048 13:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:21:20.048 13:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:20.048 13:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:21:20.303 13:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:21:20.303 13:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:21:20.303 13:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:20.303 13:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:21:20.560 13:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:20.560 13:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:21:20.818 13:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:21:20.818 13:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:21:21.075 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:21:21.332 13:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
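The set_ANA_state steps (script lines 59 and 60 in the trace) are two nvmf_subsystem_listener_set_ana_state RPCs against the target, one per listener port; the sleep 1 that follows gives the host time to pick up the ANA change. A sketch under the same assumptions as above:

    # Sketch: the first argument sets the ANA state of the 4420 listener,
    # the second that of the 4421 listener, on subsystem cnode1.
    set_ANA_state() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }
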
00:21:22.704 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:21:22.704 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:21:22.704 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:22.704 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:21:22.704 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:22.704 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:21:22.704 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:22.704 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:21:22.961 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:22.961 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:21:22.961 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:22.962 13:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:21:23.219 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:23.219 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:21:23.219 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:23.219 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:21:23.477 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:23.477 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:21:23.477 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:23.477 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:21:23.734 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:23.734 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:21:23.734 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:23.734 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:21:23.991 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:23.991 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:21:23.991 13:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:21:24.249 13:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:21:24.506 13:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:21:25.438 13:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:21:25.438 13:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:21:25.438 13:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:25.438 13:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:21:25.696 13:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:21:25.696 13:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:21:25.696 13:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:25.696 13:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:21:25.954 13:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:25.954 13:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:21:25.954 13:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:25.954 13:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:21:26.212 13:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:26.212 13:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:21:26.212 13:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:26.212 13:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:21:26.470 13:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:26.470 13:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:21:26.470 13:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:26.470 13:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:21:26.728 13:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:26.728 13:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:21:26.728 13:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:26.728 13:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:21:26.985 13:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:26.985 13:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:21:26.985 13:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:21:27.242 13:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:21:27.500 13:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
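Note what the bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active call earlier in the trace changed: in the optimized/optimized check above (script line 121), both paths report current == true, whereas before the policy change a single path was current at a time. One illustrative way to count the active paths from the shell (this jq expression is not part of the test itself):

    # Count the paths bdevperf is actively using; with active_active and
    # both listeners in a usable ANA state, expect 2 rather than 1.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq '[.poll_groups[].io_paths[] | select(.current == true)] | length'
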
00:21:28.871 13:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:21:28.871 13:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:21:28.871 13:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:28.871 13:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:21:28.871 13:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:28.871 13:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:21:28.872 13:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:28.872 13:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:21:29.129 13:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:29.129 13:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:21:29.129 13:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:29.129 13:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:21:29.387 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:29.387 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:21:29.387 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:29.387 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:21:29.648 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:29.648 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:21:29.648 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:29.648 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:21:29.905 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:29.905 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:21:29.905 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:29.905 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:21:30.163 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:30.163 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:21:30.163 13:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:21:30.452 13:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:21:30.710 13:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:21:31.644 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:21:31.644 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:21:31.644 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:31.644 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:21:31.902 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:31.902 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:21:31.902 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:31.902 13:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:21:32.161 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:21:32.161 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:21:32.161 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:32.161 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:21:32.419 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:32.419 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:21:32.419 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:32.419 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:21:32.677 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:32.677 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:21:32.677 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:32.677 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:21:32.934 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:21:32.934 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:21:32.934 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:21:32.934 13:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:21:33.193 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:21:33.193 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 632199
00:21:33.193 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 632199 ']'
00:21:33.193 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 632199
00:21:33.193 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:21:33.193 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:33.193 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 632199
00:21:33.193 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:21:33.193 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:21:33.193 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 632199'
00:21:33.193 killing process with pid 632199
00:21:33.193 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 632199
00:21:33.193 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 632199
00:21:33.193 Connection closed with partial response:
00:21:33.193
00:21:33.193
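The killprocess helper traced out of common/autotest_common.sh above wraps the kill in a few sanity checks before reaping the process. A sketch reconstructed from the @950 through @974 lines of the trace (the real helper also special-cases processes named sudo, elided here):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1        # @950: refuse an empty pid
        kill -0 "$pid" || return 1       # @954: bail out if it is already gone
        local process_name
        if [ "$(uname)" = Linux ]; then  # @955
            # @956: comm of the target; reactor_2 for this bdevperf run
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # @960: the canonical helper handles process_name = sudo differently
        echo "killing process with pid $pid"   # @968
        kill "$pid"                            # @969
        wait "$pid"                            # @974: reap and propagate exit status
    }
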
00:21:33.455 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 632199
00:21:33.455 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:33.455 [2024-07-25 13:49:55.795771] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:21:33.455 [2024-07-25 13:49:55.795846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid632199 ]
00:21:33.455 EAL: No free 2048 kB hugepages reported on node 1
00:21:33.455 [2024-07-25 13:49:55.854451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:33.455 [2024-07-25 13:49:55.961369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:21:33.455 Running I/O for 90 seconds...
00:21:33.455 [2024-07-25 13:50:11.798305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:104424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.455 [2024-07-25 13:50:11.798381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:21:33.455 [2024-07-25 13:50:11.798462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.455 [2024-07-25 13:50:11.798483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:21:33.455 [2024-07-25 13:50:11.798506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:104432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.455 [2024-07-25 13:50:11.798522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:21:33.455 [2024-07-25 13:50:11.798544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.455 [2024-07-25 13:50:11.798560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:21:33.455 [2024-07-25 13:50:11.798581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:104448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.455 [2024-07-25 13:50:11.798597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:33.455 [2024-07-25 13:50:11.798617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.455 [2024-07-25 13:50:11.798633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:33.455 [2024-07-25 13:50:11.798654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.455 [2024-07-25 13:50:11.798669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:21:33.455 [2024-07-25 13:50:11.798690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:104472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.455 [2024-07-25 13:50:11.798705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:21:33.455 [2024-07-25 13:50:11.798726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.455 [2024-07-25 13:50:11.798741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:21:33.455 [2024-07-25 13:50:11.799119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:104488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.455 [2024-07-25 13:50:11.799144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:21:33.455 [2024-07-25 13:50:11.799171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.455 [2024-07-25 13:50:11.799203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:21:33.455 [2024-07-25 13:50:11.799228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:104504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.455 [2024-07-25 13:50:11.799245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:21:33.455 [2024-07-25 13:50:11.799268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:104512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.455 [2024-07-25 13:50:11.799283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:21:33.455 [2024-07-25 13:50:11.799306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:104520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.455 [2024-07-25 13:50:11.799323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:21:33.455 [2024-07-25 13:50:11.799345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.455 [2024-07-25 13:50:11.799361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:21:33.455 [2024-07-25 13:50:11.799401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.455 [2024-07-25 13:50:11.799417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:21:33.455 [2024-07-25 13:50:11.799453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:104544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.455 [2024-07-25 13:50:11.799469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:21:33.455 [2024-07-25 13:50:11.799490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.455 [2024-07-25 13:50:11.799505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:21:33.455 [2024-07-25 13:50:11.799526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:104560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.455 [2024-07-25 13:50:11.799541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.799561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:104568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.456 [2024-07-25 13:50:11.799576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.799597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:104576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.456 [2024-07-25 13:50:11.799612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.799633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:104584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.456 [2024-07-25 13:50:11.799648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.799669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:104592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.456 [2024-07-25 13:50:11.799688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.799710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:104600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.456 [2024-07-25 13:50:11.799725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.799747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:104608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.456 [2024-07-25 13:50:11.799762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.799783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:104616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.456 [2024-07-25 13:50:11.799799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.799820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:104624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.456 [2024-07-25 13:50:11.799836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.799857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.456 [2024-07-25 13:50:11.799872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.799893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:104640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.456 [2024-07-25 13:50:11.799908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.799929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:104648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.456 [2024-07-25 13:50:11.799944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.799965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.456 [2024-07-25 13:50:11.799980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.800001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:104664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.456 [2024-07-25 13:50:11.800016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.800052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:104672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.456 [2024-07-25 13:50:11.800078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.800169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.456 [2024-07-25 13:50:11.800205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.800235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.456 [2024-07-25 13:50:11.800252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.800282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.456 [2024-07-25 13:50:11.800299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.800324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.456 [2024-07-25 13:50:11.800355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.800381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:103984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.456 [2024-07-25 13:50:11.800396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.800420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.456 [2024-07-25 13:50:11.800436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.800460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:104000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.456 [2024-07-25 13:50:11.800476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.800515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.456 [2024-07-25 13:50:11.800531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.800554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:104016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.456 [2024-07-25 13:50:11.800569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.800592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:104024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.456 [2024-07-25 13:50:11.800607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.800630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:104032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.456 [2024-07-25 13:50:11.800645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.800668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:104040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.456 [2024-07-25 13:50:11.800683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.800706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:104048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.456 [2024-07-25 13:50:11.800721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.800744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:104056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.456 [2024-07-25 13:50:11.800760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.800787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:104064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.456 [2024-07-25 13:50:11.800804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.800827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.456 [2024-07-25 13:50:11.800842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.800865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.456 [2024-07-25 13:50:11.800880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.800903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.456 [2024-07-25 13:50:11.800919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.800942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:104096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.456 [2024-07-25 13:50:11.800957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.800979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.456 [2024-07-25 13:50:11.800995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:21:33.456 [2024-07-25 13:50:11.801017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.456 [2024-07-25 13:50:11.801033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.801080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.801098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.801123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:104736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.801139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.801239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:104744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.801260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.801289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.801307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.801334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:104760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.801351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.801378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.801398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.801426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.801443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.801469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:104784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.801491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.801545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.801584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.801621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.801646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.801683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.801707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.801746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:104816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.801773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.801810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:104824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.801834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.801866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:104832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.801887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.801918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.801939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.801970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:104848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.801991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.802022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.802043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.802101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:104864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.802130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.802165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:104872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.802188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.802223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:104880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.802244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.802277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.802299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.802333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:104896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.802354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.802402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.802423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.802455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:104912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.802476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.802508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:104920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.802529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.802561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.802582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.802615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:104936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.802636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.802668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:104944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.802689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.802721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:104952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.802742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.802774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:104960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.457 [2024-07-25 13:50:11.802795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.802833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.457 [2024-07-25 13:50:11.802854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.802886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.457 [2024-07-25 13:50:11.802907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.802939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.457 [2024-07-25 13:50:11.802960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.802991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:104128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.457 [2024-07-25 13:50:11.803012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:21:33.457 [2024-07-25 13:50:11.803072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:104136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.458 [2024-07-25 13:50:11.803097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.803148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:104144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.458 [2024-07-25 13:50:11.803171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.803206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:104152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.458 [2024-07-25 13:50:11.803229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.803263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:104160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.458 [2024-07-25 13:50:11.803286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.803321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:104968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.458 [2024-07-25 13:50:11.803346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.803383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.458 [2024-07-25 13:50:11.803406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.803455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:104984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.458 [2024-07-25 13:50:11.803493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.803528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:33.458 [2024-07-25 13:50:11.803552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.803597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:104168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.458 [2024-07-25 13:50:11.803624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.803663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:104176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.458 [2024-07-25 13:50:11.803686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.803720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:104184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.458 [2024-07-25 13:50:11.803742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.803775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:104192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.458 [2024-07-25 13:50:11.803798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.803833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.458 [2024-07-25 13:50:11.803855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.803888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.458 [2024-07-25 13:50:11.803911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.803944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.458 [2024-07-25 13:50:11.803966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.804000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:104224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.458 [2024-07-25 13:50:11.804022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.804082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.458 [2024-07-25 13:50:11.804108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.804144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:104240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.458 [2024-07-25 13:50:11.804167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.804203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:104248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.458 [2024-07-25 13:50:11.804227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.804262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:104256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.458 [2024-07-25 13:50:11.804286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.804326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:104264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.458 [2024-07-25 13:50:11.804363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.804397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.458 [2024-07-25 13:50:11.804419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.804454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.458 [2024-07-25 13:50:11.804476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.804511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.458 [2024-07-25 13:50:11.804534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.804568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:104296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.458 [2024-07-25 13:50:11.804592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.804626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:104304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.458 [2024-07-25 13:50:11.804649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:21:33.458 [2024-07-25 13:50:11.804683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:104312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:33.458 [2024-07-25 13:50:11.804706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006e
p:0 m:0 dnr:0 00:21:33.458 [2024-07-25 13:50:11.804740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:104320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.458 [2024-07-25 13:50:11.804764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:33.458 [2024-07-25 13:50:11.804800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:104328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.458 [2024-07-25 13:50:11.804824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:33.458 [2024-07-25 13:50:11.804860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.458 [2024-07-25 13:50:11.804883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:33.458 [2024-07-25 13:50:11.804920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.458 [2024-07-25 13:50:11.804943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:33.458 [2024-07-25 13:50:11.804980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:104352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.458 [2024-07-25 13:50:11.805003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:33.458 [2024-07-25 13:50:11.805222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:104360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.458 [2024-07-25 13:50:11.805255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:33.458 [2024-07-25 13:50:11.805302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.458 [2024-07-25 13:50:11.805326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:33.458 [2024-07-25 13:50:11.805380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:104376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.458 [2024-07-25 13:50:11.805404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:11.805446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:104384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.459 [2024-07-25 13:50:11.805470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:11.805511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:104392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.459 [2024-07-25 13:50:11.805534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:11.805574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:104400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.459 [2024-07-25 13:50:11.805597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:11.805638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:104408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.459 [2024-07-25 13:50:11.805661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:11.805702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.459 [2024-07-25 13:50:11.805725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.507619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:125648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.459 [2024-07-25 13:50:27.507665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.507741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.459 [2024-07-25 13:50:27.507762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.507784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.459 [2024-07-25 13:50:27.507800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.507821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:125752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.507837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.507859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:125672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.459 [2024-07-25 13:50:27.507882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.507905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:125704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.459 [2024-07-25 13:50:27.507920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.507942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:125736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.459 [2024-07-25 
13:50:27.507957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.507993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:125760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.508009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.508038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.508086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.508111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:125792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.508128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.508153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.508169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.508191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:125824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.508207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.508230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:125840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.508246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.508268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:125856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.508299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.508322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:125872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.508360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.508384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:125888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.508400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.508422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:125904 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.508446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.508470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.508486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.508508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:125936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.508523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.508545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:125952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.508561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.508583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:125968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.508599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.508621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:125984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.508637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.508675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.508692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.508714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:126016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.508729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.510225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.510252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.510279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:126048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.510297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.510320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.510337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.510358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:126080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.510389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.510413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.510429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:33.459 [2024-07-25 13:50:27.510455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:126112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.459 [2024-07-25 13:50:27.510487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:33.460 [2024-07-25 13:50:27.510511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.460 [2024-07-25 13:50:27.510526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:33.460 [2024-07-25 13:50:27.510549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:126144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.460 [2024-07-25 13:50:27.510565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:33.460 [2024-07-25 13:50:27.510587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:126160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.460 [2024-07-25 13:50:27.510603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:33.460 [2024-07-25 13:50:27.510624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:126176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.460 [2024-07-25 13:50:27.510640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:33.460 [2024-07-25 13:50:27.510662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:126192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.460 [2024-07-25 13:50:27.510678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:33.460 [2024-07-25 13:50:27.510700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:126208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.460 [2024-07-25 13:50:27.510716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:21:33.460 [2024-07-25 13:50:27.510738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:126224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.460 [2024-07-25 13:50:27.510755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:33.460 [2024-07-25 13:50:27.510776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:126240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.460 [2024-07-25 13:50:27.510793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:33.460 [2024-07-25 13:50:27.510815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:126256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.460 [2024-07-25 13:50:27.510831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:33.460 [2024-07-25 13:50:27.510853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:126272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.460 [2024-07-25 13:50:27.510869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:33.460 [2024-07-25 13:50:27.510891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:126288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.460 [2024-07-25 13:50:27.510907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:33.460 [2024-07-25 13:50:27.510933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.460 [2024-07-25 13:50:27.510950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:33.460 [2024-07-25 13:50:27.510972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:126320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.460 [2024-07-25 13:50:27.510988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:33.460 [2024-07-25 13:50:27.511026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:126336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.460 [2024-07-25 13:50:27.511041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:33.460 [2024-07-25 13:50:27.511097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:126352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.460 [2024-07-25 13:50:27.511115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:33.460 [2024-07-25 13:50:27.511137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:126368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.460 [2024-07-25 13:50:27.511153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:33.460 [2024-07-25 13:50:27.511175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:126384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.460 [2024-07-25 13:50:27.511191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:33.460 [2024-07-25 13:50:27.511213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:126400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.460 [2024-07-25 13:50:27.511229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:33.460 [2024-07-25 13:50:27.511251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:126416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.460 [2024-07-25 13:50:27.511266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:33.460 [2024-07-25 13:50:27.511288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:126432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.460 [2024-07-25 13:50:27.511304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:33.460 [2024-07-25 13:50:27.511326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.460 [2024-07-25 13:50:27.511342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:33.461 [2024-07-25 13:50:27.511363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.461 [2024-07-25 13:50:27.511394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:33.461 [2024-07-25 13:50:27.511417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:126480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.461 [2024-07-25 13:50:27.511432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:33.461 [2024-07-25 13:50:27.511453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:126496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.461 [2024-07-25 13:50:27.511473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.461 [2024-07-25 13:50:27.511495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:126512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.461 [2024-07-25 13:50:27.511511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:33.461 [2024-07-25 13:50:27.511532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:126528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.461 [2024-07-25 13:50:27.511548] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:33.461 [2024-07-25 13:50:27.511569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.461 [2024-07-25 13:50:27.511585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:33.461 [2024-07-25 13:50:27.511607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:126560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.461 [2024-07-25 13:50:27.511622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:33.461 [2024-07-25 13:50:27.511643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.461 [2024-07-25 13:50:27.511659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:33.461 [2024-07-25 13:50:27.511680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:126592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.461 [2024-07-25 13:50:27.511696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:33.461 [2024-07-25 13:50:27.511717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:126608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.461 [2024-07-25 13:50:27.511733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:33.461 [2024-07-25 13:50:27.511754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.461 [2024-07-25 13:50:27.511770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:33.461 [2024-07-25 13:50:27.511791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.461 [2024-07-25 13:50:27.511807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:33.461 [2024-07-25 13:50:27.511828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:126656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.461 [2024-07-25 13:50:27.511844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:33.461 [2024-07-25 13:50:27.511865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.461 [2024-07-25 13:50:27.511881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:33.461 [2024-07-25 13:50:27.511902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:126688 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:33.461 [2024-07-25 13:50:27.511921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:33.461 [2024-07-25 13:50:27.511944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:126704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:33.461 [2024-07-25 13:50:27.511960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:33.461 Received shutdown signal, test time was about 32.617047 seconds 00:21:33.461 00:21:33.461 Latency(us) 00:21:33.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.461 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:33.461 Verification LBA range: start 0x0 length 0x4000 00:21:33.461 Nvme0n1 : 32.62 8214.56 32.09 0.00 0.00 15556.22 227.56 4026531.84 00:21:33.461 =================================================================================================================== 00:21:33.461 Total : 8214.56 32.09 0.00 0.00 15556.22 227.56 4026531.84 00:21:33.461 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:33.720 rmmod nvme_tcp 00:21:33.720 rmmod nvme_fabrics 00:21:33.720 rmmod nvme_keyring 00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 631916 ']' 00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 631916 00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 631916 ']' 00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 631916 00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:21:33.720 13:50:30 
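The verify job's throughput columns are internally consistent with its 4096-byte I/O size; a quick back-of-the-envelope check in plain shell arithmetic (assuming 1 MiB = 1048576 bytes):

    # 8214.56 IOPS x 4096 bytes per I/O, converted to MiB/s
    echo 'scale=2; 8214.56 * 4096 / 1048576' | bc
    # => 32.08, matching the reported 32.09 MiB/s to within rounding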
00:21:33.461 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:33.720 rmmod nvme_tcp
00:21:33.720 rmmod nvme_fabrics
00:21:33.720 rmmod nvme_keyring
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 631916 ']'
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 631916
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 631916 ']'
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 631916
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 631916
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 631916'
00:21:33.720 killing process with pid 631916
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 631916
00:21:33.720 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 631916
00:21:33.978 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:21:33.978 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:21:33.978 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:21:33.978 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:33.978 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:33.978 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:33.978 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:33.978 13:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:36.514
00:21:36.514 real	0m41.383s
00:21:36.514 user	2m2.637s
00:21:36.514 sys	0m11.660s
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:21:36.514 ************************************
00:21:36.514 END TEST nvmf_host_multipath_status
00:21:36.514 ************************************
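In outline, the nvmftestfini/killprocess sequence just traced is the standard autotest teardown: flush host state, unload the kernel initiator modules, then reap the target process. A hand-written equivalent (a sketch, not the autotest_common.sh source; $pid stands in for the recorded nvmfpid 631916):

    # tear down an nvmf target the way the trace above does
    sync                                                      # flush outstanding writes
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics    # unload host-side modules
    kill -0 "$pid"                                            # is the target still alive?
    kill "$pid"                                               # SIGTERM the reactor
    wait "$pid"                                               # reap it, propagating its exit status
    ip netns delete cvl_0_0_ns_spdk                           # roughly what _remove_spdk_ns does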
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:21:36.514 ************************************
00:21:36.514 START TEST nvmf_discovery_remove_ifc
00:21:36.514 ************************************
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:21:36.514 * Looking for test storage...
00:21:36.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
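NVME_CONNECT and the NVME_HOST array above are the pieces later steps splice together when attaching the kernel initiator. A hypothetical expansion built only from the variables above (the target address/port come from the nvmf_tcp_init trace further down; the cnode0 subsystem name is illustrative, not from this log):

    # $NVME_CONNECT "${NVME_HOST[@]}" plus transport details, expanded by hand
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
                 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
                 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0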
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... several earlier repetitions of the same golangci/protoc/go prefixes elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same repeated tail elided ...]
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same repeated tail elided ...]
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... same repeated tail elided ...]:/var/lib/snapd/snap/bin
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
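paths/export.sh unconditionally prepends the Go/protoc/golangci directories every time it is sourced, which is why the same segments pile up in the PATH dumps above. The pattern in miniature, plus the guard that would keep it idempotent (a sketch, not the export.sh source):

    # unconditional prepend: duplicates accumulate across repeated sourcing
    PATH=/opt/go/1.21.1/bin:$PATH
    # a containment check like this would avoid the duplication instead:
    case ":$PATH:" in *:/opt/go/1.21.1/bin:*) ;; *) PATH=/opt/go/1.21.1/bin:$PATH ;; esac
    export PATH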
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:36.514 13:50:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:38.414 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:21:38.414 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:21:38.414 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable
00:21:38.414 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:21:38.414 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:21:38.415 [... roughly twenty xtrace lines (nvmf/common.sh@291-@318) elided: declaring the pci_devs/pci_net_devs/pci_drivers/net_devs arrays and filling the e810/x722/mlx PCI ID tables from pci_bus_cache ...]
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:21:38.415 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:21:38.415 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:21:38.415 [... the same @342-@352 driver/ID checks repeated for 0000:0a:00.1 elided ...]
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:21:38.415 Found net devices under 0000:0a:00.0: cvl_0_0
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:21:38.415 [... the same @382-@399 sysfs lookup repeated for 0000:0a:00.1 elided ...]
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:21:38.415 Found net devices under 0000:0a:00.1: cvl_0_1
00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
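The @383/@399 lines map each PCI function to its kernel net device by globbing sysfs and stripping the directory prefix; the same lookup done by hand (0000:0a:00.0 standing in for $pci):

    # net device(s) registered under one PCI function
    ls /sys/bus/pci/devices/0000:0a:00.0/net/
    # => cvl_0_0   (the name the script then appends to net_devs)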
13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:38.415 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:38.415 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:38.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:38.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:21:38.415 00:21:38.415 --- 10.0.0.2 ping statistics --- 00:21:38.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.416 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:21:38.416 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:38.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:38.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:21:38.416 00:21:38.416 --- 10.0.0.1 ping statistics --- 00:21:38.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.416 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:21:38.416 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:38.416 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:21:38.416 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:38.416 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:38.416 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:38.416 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:38.416 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:38.416 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:38.416 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:38.416 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:21:38.416 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:38.416 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:38.416 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:38.416 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=638403 00:21:38.416 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:38.416 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 638403 00:21:38.416 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 638403 ']' 00:21:38.416 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.416 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:38.416 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
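With the ping in each direction confirming the loop between the root namespace (cvl_0_1, 10.0.0.1) and cvl_0_0_ns_spdk (cvl_0_0, 10.0.0.2), nvmfappstart launches nvmf_tgt inside the namespace and waitforlisten (@482, then autotest_common.sh@835 onward) blocks until the app owns its RPC socket. The real helper polls with retries; a minimal sketch of the idea, reusing the rpc_addr and max_retries values visible in the trace (the actual loop issues an RPC rather than merely testing for the socket):

  # Hedged sketch of the waitforlisten pattern, not the exact helper.
  waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i=0
    while (( i++ < max_retries )); do
      kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
      [[ -S $rpc_addr ]] && return 0           # UNIX domain socket is up
      sleep 0.1
    done
    return 1
  }
  # mirroring the trace: waitforlisten_sketch "$nvmfpid" /var/tmp/spdk.sock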
00:21:38.416 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:38.416 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:38.416 [2024-07-25 13:50:35.379201] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:38.416 [2024-07-25 13:50:35.379272] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.416 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.416 [2024-07-25 13:50:35.440675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.675 [2024-07-25 13:50:35.550583] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.675 [2024-07-25 13:50:35.550629] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.675 [2024-07-25 13:50:35.550658] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.675 [2024-07-25 13:50:35.550669] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.675 [2024-07-25 13:50:35.550679] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:38.675 [2024-07-25 13:50:35.550711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.675 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:38.675 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:21:38.675 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:38.675 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:38.675 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:38.675 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.675 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:21:38.675 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.675 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:38.675 [2024-07-25 13:50:35.700755] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.675 [2024-07-25 13:50:35.709020] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:38.977 null0 00:21:38.977 [2024-07-25 13:50:35.740880] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.977 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.978 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=638437 00:21:38.978 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 
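Two SPDK processes are now alive: the target, inside cvl_0_0_ns_spdk on the default /var/tmp/spdk.sock, and a host-side nvmf_tgt on /tmp/host.sock started with --wait-for-rpc (framework paused until told to initialize) and -L bdev_nvme for the debug log seen later. The @43 batch itself is not echoed, but its NOTICE lines show what it did: create the TCP transport, back the subsystem with a null bdev (null0), and listen on 10.0.0.2 at 8009 (discovery) and 4420 (I/O). From here every rpc_cmd picks its process with -s, as the next traced commands show:

  # Target-side RPC: default socket, so no -s needed in the harness wrappers.
  # Host-side RPC: always -s /tmp/host.sock, e.g. the two setup calls at @65-@66:
  rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1   # exactly as traced at @65
  rpc_cmd -s /tmp/host.sock framework_start_init         # releases --wait-for-rpc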
00:21:38.978 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 638437 /tmp/host.sock 00:21:38.978 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 638437 ']' 00:21:38.978 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:21:38.978 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:38.978 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:38.978 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:38.978 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:38.978 13:50:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:38.978 [2024-07-25 13:50:35.803286] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:38.978 [2024-07-25 13:50:35.803369] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid638437 ] 00:21:38.978 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.978 [2024-07-25 13:50:35.859748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.978 [2024-07-25 13:50:35.964049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.978 13:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:38.978 13:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:21:38.978 13:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:38.978 13:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:21:38.978 13:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.978 13:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:38.978 13:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.978 13:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:21:38.978 13:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.235 13:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:39.235 13:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.235 13:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:21:39.235 13:50:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.235 13:50:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:40.166 [2024-07-25 13:50:37.168231] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:40.166 [2024-07-25 13:50:37.168272] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:40.166 [2024-07-25 13:50:37.168296] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:40.424 [2024-07-25 13:50:37.256567] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:40.424 [2024-07-25 13:50:37.360195] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:40.424 [2024-07-25 13:50:37.360264] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:40.424 [2024-07-25 13:50:37.360311] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:40.424 [2024-07-25 13:50:37.360334] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:40.424 [2024-07-25 13:50:37.360383] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:40.424 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.424 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:21:40.424 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:40.424 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:40.424 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:40.424 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.424 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:40.424 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:40.424 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:40.424 [2024-07-25 13:50:37.366546] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x23a98e0 was disconnected and freed. delete nvme_qpair. 
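The @29/@33 helpers drive everything that follows. Their bodies can be read back almost verbatim from the piped commands in the trace; a plausible reconstruction (the real loop in discovery_remove_ifc.sh presumably also bounds its retries):

  get_bdev_list() {   # @29: names of all host-side bdevs, space-joined
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {   # @33-@34: poll once a second until the list matches
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
      sleep 1
    done
  }
  # @72: wait_for_bdev nvme0n1  -> returns as soon as discovery attached nvme0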
00:21:40.424 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.424 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:21:40.424 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:21:40.424 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:21:40.424 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:21:40.424 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:40.424 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:40.424 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:40.424 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.424 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:40.425 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:40.425 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:40.682 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.682 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:40.682 13:50:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:41.615 13:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:41.615 13:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:41.615 13:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:41.615 13:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.615 13:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:41.615 13:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:41.615 13:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:41.615 13:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.615 13:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:41.615 13:50:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:42.547 13:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:42.547 13:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:42.547 13:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:42.547 13:50:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.547 13:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:42.547 13:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:42.547 13:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:42.547 13:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.804 13:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:42.804 13:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:43.734 13:50:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:43.734 13:50:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.734 13:50:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:43.734 13:50:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.734 13:50:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:43.734 13:50:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:43.734 13:50:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:43.734 13:50:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.734 13:50:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:43.734 13:50:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:44.665 13:50:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:44.665 13:50:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:44.665 13:50:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:44.665 13:50:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.665 13:50:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:44.665 13:50:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:44.665 13:50:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:44.665 13:50:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.665 13:50:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:44.665 13:50:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:46.036 13:50:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:46.036 13:50:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:46.036 13:50:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.036 13:50:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:46.036 13:50:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:46.036 13:50:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:46.036 13:50:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:46.036 13:50:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.036 13:50:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:46.036 13:50:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:46.036 [2024-07-25 13:50:42.801621] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:21:46.036 [2024-07-25 13:50:42.801695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.036 [2024-07-25 13:50:42.801715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.036 [2024-07-25 13:50:42.801731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.036 [2024-07-25 13:50:42.801744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.036 [2024-07-25 13:50:42.801756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.036 [2024-07-25 13:50:42.801768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.036 [2024-07-25 13:50:42.801781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.036 [2024-07-25 13:50:42.801794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.036 [2024-07-25 13:50:42.801807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.036 [2024-07-25 13:50:42.801820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.036 [2024-07-25 13:50:42.801839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2370320 is same with the state(5) to be set 00:21:46.036 [2024-07-25 13:50:42.811641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2370320 (9): Bad file descriptor 00:21:46.036 [2024-07-25 13:50:42.821682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.966 13:50:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:46.966 13:50:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
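The errno 110 (ETIMEDOUT) failures above are the intended effect of the @75/@76 step earlier: the target interface lost its address and link while the host still held a connected qpair, so the live socket and every reconnect attempt time out. For reference, the two-line failure injection from the trace:

  # @75-@76: yank the target's interface out from under the live connection
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  # host side: spdk_sock_recv()/connect() fail with errno 110 and the
  # bdev_nvme layer enters the reset/reconnect cycle traced here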
00:21:46.966 13:50:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:46.966 13:50:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.966 13:50:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:46.966 13:50:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:46.966 13:50:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:46.966 [2024-07-25 13:50:43.853088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:21:46.966 [2024-07-25 13:50:43.853133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2370320 with addr=10.0.0.2, port=4420 00:21:46.966 [2024-07-25 13:50:43.853154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2370320 is same with the state(5) to be set 00:21:46.966 [2024-07-25 13:50:43.853185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2370320 (9): Bad file descriptor 00:21:46.966 [2024-07-25 13:50:43.853640] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:46.966 [2024-07-25 13:50:43.853682] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.966 [2024-07-25 13:50:43.853701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.966 [2024-07-25 13:50:43.853719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.966 [2024-07-25 13:50:43.853744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.966 [2024-07-25 13:50:43.853760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.966 13:50:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.966 13:50:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:46.966 13:50:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:47.900 [2024-07-25 13:50:44.856268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:47.900 [2024-07-25 13:50:44.856330] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:47.900 [2024-07-25 13:50:44.856345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:47.900 [2024-07-25 13:50:44.856375] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:21:47.900 [2024-07-25 13:50:44.856427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
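The timing follows from the @69 discovery flags: --reconnect-delay-sec 1 retries roughly once a second, --fast-io-fail-timeout-sec 1 fails pending I/O after one second, and --ctrlr-loss-timeout-sec 2 gives up and deletes the controller after two seconds without a successful reconnect, which is why get_bdev_list turns empty just below. One way to watch that window by hand (not something this script does in the trace; bdev_nvme_get_controllers is a standard SPDK RPC):

  # hedged example: the controller lingers while reconnecting, then vanishes
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  # expected: "nvme0" during the reconnect window, empty once the loss timeout fires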
00:21:47.900 [2024-07-25 13:50:44.856478] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:21:47.900 [2024-07-25 13:50:44.856549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.900 [2024-07-25 13:50:44.856571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.900 [2024-07-25 13:50:44.856597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.900 [2024-07-25 13:50:44.856611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.900 [2024-07-25 13:50:44.856624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.900 [2024-07-25 13:50:44.856638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.900 [2024-07-25 13:50:44.856650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.900 [2024-07-25 13:50:44.856664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.900 [2024-07-25 13:50:44.856677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.900 [2024-07-25 13:50:44.856689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.900 [2024-07-25 13:50:44.856702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
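Each queued admin command (the four AER slots and the keep-alive) is completed with ABORTED - SQ DELETION as the controller is dismantled. With the old path gone, the test reverses the earlier injection; @82/@83 in the trace below are the exact mirror of @75/@76:

  # @82-@83: give the target its address and link back
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # discovery then re-attaches the same subsystem as a fresh controller
  # (nvme1), so @86 waits for nvme1n1 rather than nvme0n1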
00:21:47.900 [2024-07-25 13:50:44.856758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236f780 (9): Bad file descriptor 00:21:47.900 [2024-07-25 13:50:44.857745] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:21:47.900 [2024-07-25 13:50:44.857767] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:21:47.900 13:50:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:47.900 13:50:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.900 13:50:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:47.900 13:50:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.900 13:50:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:47.900 13:50:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:47.900 13:50:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:47.900 13:50:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.900 13:50:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:21:47.900 13:50:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.900 13:50:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:48.157 13:50:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:21:48.157 13:50:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:48.157 13:50:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.157 13:50:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:48.157 13:50:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.157 13:50:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:48.157 13:50:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:48.157 13:50:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:48.157 13:50:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.157 13:50:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:48.157 13:50:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:49.087 13:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:49.087 13:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.087 13:50:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:49.087 13:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.087 13:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:49.087 13:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:49.087 13:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:49.087 13:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.087 13:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:49.087 13:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:50.086 [2024-07-25 13:50:46.914634] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:50.086 [2024-07-25 13:50:46.914656] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:50.086 [2024-07-25 13:50:46.914679] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:50.086 [2024-07-25 13:50:47.044115] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:21:50.086 13:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:50.086 13:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:50.086 13:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:50.086 13:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.086 13:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:50.086 13:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:50.086 13:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:50.086 13:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.355 13:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:50.355 13:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:50.355 [2024-07-25 13:50:47.228263] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:50.355 [2024-07-25 13:50:47.228316] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:50.355 [2024-07-25 13:50:47.228353] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:50.355 [2024-07-25 13:50:47.228389] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:21:50.355 [2024-07-25 13:50:47.228401] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:50.355 [2024-07-25 13:50:47.232776] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x23930d0 was disconnected and freed. 
delete nvme_qpair. 00:21:51.288 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:51.289 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:51.289 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:51.289 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.289 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:51.289 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:51.289 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:51.289 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.289 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:21:51.289 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:21:51.289 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 638437 00:21:51.289 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 638437 ']' 00:21:51.289 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 638437 00:21:51.289 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:21:51.289 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:51.289 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 638437 00:21:51.289 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:51.289 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:51.289 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 638437' 00:21:51.289 killing process with pid 638437 00:21:51.289 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 638437 00:21:51.289 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 638437 00:21:51.547 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:21:51.547 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:51.547 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:21:51.547 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:51.547 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:21:51.547 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:51.547 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:51.547 rmmod nvme_tcp 00:21:51.547 rmmod nvme_fabrics 00:21:51.547 rmmod nvme_keyring 
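nvme1n1 shows up, the traps are cleared, and teardown begins: killprocess for the host pid (@90), then nvmftestfini unloads the kernel modules (the rmmod lines above) and kills the target pid the same way. The traced checks (autotest_common.sh@950-@974) reconstruct to roughly the following; the ps comm check distinguishes a directly spawned reactor (reactor_0/reactor_1 in this run) from a sudo wrapper (sketch inferred from the trace, not copied from autotest_common.sh):

  killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone
    if [[ $(uname) == Linux ]]; then
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")
      [[ $process_name == sudo ]] && return 1       # a sudo wrapper would need different handling
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                             # reap it if it is our child
  }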
00:21:51.547 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:51.547 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:21:51.547 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:21:51.547 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 638403 ']' 00:21:51.547 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 638403 00:21:51.547 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 638403 ']' 00:21:51.547 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 638403 00:21:51.547 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:21:51.547 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:51.547 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 638403 00:21:51.547 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:51.547 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:51.547 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 638403' 00:21:51.547 killing process with pid 638403 00:21:51.547 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 638403 00:21:51.547 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 638403 00:21:51.806 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:51.806 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:51.806 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:51.806 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:51.806 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:51.806 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.806 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.806 13:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:54.343 00:21:54.343 real 0m17.715s 00:21:54.343 user 0m25.644s 00:21:54.343 sys 0m3.009s 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:54.343 ************************************ 00:21:54.343 END TEST nvmf_discovery_remove_ifc 00:21:54.343 ************************************ 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.343 ************************************ 00:21:54.343 START TEST nvmf_identify_kernel_target 00:21:54.343 ************************************ 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:54.343 * Looking for test storage... 00:21:54.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.343 13:50:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:21:54.343 13:50:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:21:56.246 
13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:56.246 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:56.246 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:56.246 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:56.246 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:56.246 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:56.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:56.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:56.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:56.247 13:50:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:56.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:56.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:21:56.247 00:21:56.247 --- 10.0.0.2 ping statistics --- 00:21:56.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.247 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:56.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:56.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:21:56.247 00:21:56.247 --- 10.0.0.1 ping statistics --- 00:21:56.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.247 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:56.247 13:50:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:21:57.620 Waiting for block devices as requested 00:21:57.620 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:21:57.620 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:21:57.620 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:21:57.877 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:21:57.877 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:21:57.877 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:21:58.136 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:21:58.136 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:21:58.136 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:21:58.136 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:21:58.393 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:21:58.393 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:21:58.393 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:21:58.651 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:21:58.651 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:21:58.651 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:21:58.651 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
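configure_kernel_target, whose trace continues below, builds a kernel (nvmet) target over configfs. A backing namespace is only carved from an NVMe block device that is neither zoned nor already carrying a partition table; the "No valid GPT data, bailing" line below is the pass condition of that probe, not a failure. Bash xtrace hides redirection targets, so in the following sketch the echoed values are mapped onto the standard nvmet attribute files by inference:

    # Kernel-target provisioning, condensed from the trace below.
    # NOTE: the attribute files receiving each echo are inferred from the
    # standard nvmet configfs layout; xtrace shows only the echoed values.
    nqn=nqn.2016-06.io.spdk:testnqn
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/$nqn
    ns=$subsys/namespaces/1
    port=$nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys" "$ns" "$port"
    echo "SPDK-$nqn"  > "$subsys/attr_model"          # model string
    echo 1            > "$subsys/attr_allow_any_host" # no host whitelist
    echo /dev/nvme0n1 > "$ns/device_path"             # free device from the scan
    echo 1            > "$ns/enable"
    echo 10.0.0.1     > "$port/addr_traddr"           # target_ip resolved above
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"               # expose subsystem on the port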
00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:21:58.908 No valid GPT data, bailing 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:58.908 13:50:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:21:59.166 00:21:59.166 Discovery Log Number of Records 2, Generation counter 2 00:21:59.166 =====Discovery Log Entry 0====== 00:21:59.166 trtype: tcp 00:21:59.166 adrfam: ipv4 00:21:59.166 subtype: current discovery subsystem 00:21:59.166 treq: not specified, sq flow control disable supported 00:21:59.166 portid: 1 00:21:59.166 trsvcid: 4420 00:21:59.166 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:59.166 traddr: 10.0.0.1 00:21:59.166 eflags: none 00:21:59.166 sectype: none 00:21:59.166 =====Discovery Log Entry 1====== 00:21:59.166 trtype: tcp 00:21:59.166 adrfam: ipv4 00:21:59.166 subtype: nvme subsystem 00:21:59.166 treq: not specified, sq flow control disable supported 00:21:59.166 portid: 1 00:21:59.166 trsvcid: 4420 00:21:59.166 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:59.166 traddr: 10.0.0.1 00:21:59.166 eflags: none 00:21:59.166 sectype: none 00:21:59.166 13:50:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:21:59.166 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:21:59.166 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.166 ===================================================== 00:21:59.167 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:59.167 ===================================================== 00:21:59.167 Controller Capabilities/Features 00:21:59.167 ================================ 00:21:59.167 Vendor ID: 0000 00:21:59.167 Subsystem Vendor ID: 0000 00:21:59.167 Serial Number: ceb734aab1ebbece6708 00:21:59.167 Model Number: Linux 00:21:59.167 Firmware Version: 6.7.0-68 00:21:59.167 Recommended Arb Burst: 0 00:21:59.167 IEEE OUI Identifier: 00 00 00 00:21:59.167 Multi-path I/O 00:21:59.167 May have multiple subsystem ports: No 00:21:59.167 May have multiple controllers: No 00:21:59.167 Associated with SR-IOV VF: No 00:21:59.167 Max Data Transfer Size: Unlimited 00:21:59.167 Max Number of Namespaces: 0 00:21:59.167 Max Number of I/O Queues: 1024 00:21:59.167 NVMe Specification Version (VS): 1.3 00:21:59.167 NVMe Specification Version (Identify): 1.3 00:21:59.167 Maximum Queue Entries: 1024 00:21:59.167 Contiguous Queues Required: No 00:21:59.167 Arbitration Mechanisms Supported 00:21:59.167 Weighted Round Robin: Not Supported 00:21:59.167 Vendor Specific: Not Supported 00:21:59.167 Reset Timeout: 7500 ms 00:21:59.167 Doorbell Stride: 4 bytes 00:21:59.167 NVM Subsystem Reset: Not Supported 00:21:59.167 Command Sets Supported 00:21:59.167 NVM Command Set: Supported 00:21:59.167 Boot Partition: Not Supported 00:21:59.167 Memory Page Size Minimum: 4096 bytes 00:21:59.167 Memory Page Size Maximum: 4096 bytes 00:21:59.167 Persistent Memory Region: Not Supported 00:21:59.167 Optional Asynchronous Events Supported 00:21:59.167 Namespace Attribute Notices: Not Supported 00:21:59.167 Firmware Activation Notices: Not Supported 00:21:59.167 ANA Change Notices: Not Supported 00:21:59.167 PLE Aggregate Log Change Notices: Not Supported 00:21:59.167 LBA Status Info Alert Notices: Not Supported 00:21:59.167 EGE Aggregate Log Change Notices: Not Supported 00:21:59.167 Normal NVM Subsystem Shutdown event: Not Supported 00:21:59.167 Zone Descriptor Change Notices: Not Supported 00:21:59.167 Discovery Log Change Notices: Supported 00:21:59.167 Controller Attributes 00:21:59.167 128-bit Host Identifier: Not Supported 00:21:59.167 Non-Operational Permissive Mode: Not Supported 00:21:59.167 NVM Sets: Not Supported 00:21:59.167 Read Recovery Levels: Not Supported 00:21:59.167 Endurance Groups: Not Supported 00:21:59.167 Predictable Latency Mode: Not Supported 00:21:59.167 Traffic Based Keep ALive: Not Supported 00:21:59.167 Namespace Granularity: Not Supported 00:21:59.167 SQ Associations: Not Supported 00:21:59.167 UUID List: Not Supported 00:21:59.167 Multi-Domain Subsystem: Not Supported 00:21:59.167 Fixed Capacity Management: Not Supported 00:21:59.167 Variable Capacity Management: Not Supported 00:21:59.167 Delete Endurance Group: Not Supported 00:21:59.167 Delete NVM Set: Not Supported 00:21:59.167 Extended LBA Formats Supported: Not Supported 00:21:59.167 Flexible Data Placement Supported: Not Supported 00:21:59.167 00:21:59.167 Controller Memory Buffer Support 00:21:59.167 ================================ 00:21:59.167 Supported: No 
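Both discovery records above resolve to the single kernel port: entry 0 is the well-known discovery subsystem (nqn.2014-08.org.nvmexpress.discovery), entry 1 the freshly provisioned nqn.2016-06.io.spdk:testnqn. The identify dump that starts above (and continues below) interrogates the discovery controller first. For scripted assertions the same query can be made machine-readable; a sketch, assuming a reasonably recent nvme-cli with JSON output and jq installed:

    # List discovered subsystem NQNs as plain text (hostnqn/hostid as above).
    nvme discover -t tcp -a 10.0.0.1 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
        -o json | jq -r '.records[].subnqn'
    # expected here:
    #   nqn.2014-08.org.nvmexpress.discovery
    #   nqn.2016-06.io.spdk:testnqn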
00:21:59.167 00:21:59.167 Persistent Memory Region Support 00:21:59.167 ================================ 00:21:59.167 Supported: No 00:21:59.167 00:21:59.167 Admin Command Set Attributes 00:21:59.167 ============================ 00:21:59.167 Security Send/Receive: Not Supported 00:21:59.167 Format NVM: Not Supported 00:21:59.167 Firmware Activate/Download: Not Supported 00:21:59.167 Namespace Management: Not Supported 00:21:59.167 Device Self-Test: Not Supported 00:21:59.167 Directives: Not Supported 00:21:59.167 NVMe-MI: Not Supported 00:21:59.167 Virtualization Management: Not Supported 00:21:59.167 Doorbell Buffer Config: Not Supported 00:21:59.167 Get LBA Status Capability: Not Supported 00:21:59.167 Command & Feature Lockdown Capability: Not Supported 00:21:59.167 Abort Command Limit: 1 00:21:59.167 Async Event Request Limit: 1 00:21:59.167 Number of Firmware Slots: N/A 00:21:59.167 Firmware Slot 1 Read-Only: N/A 00:21:59.167 Firmware Activation Without Reset: N/A 00:21:59.167 Multiple Update Detection Support: N/A 00:21:59.167 Firmware Update Granularity: No Information Provided 00:21:59.167 Per-Namespace SMART Log: No 00:21:59.167 Asymmetric Namespace Access Log Page: Not Supported 00:21:59.167 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:59.167 Command Effects Log Page: Not Supported 00:21:59.167 Get Log Page Extended Data: Supported 00:21:59.167 Telemetry Log Pages: Not Supported 00:21:59.167 Persistent Event Log Pages: Not Supported 00:21:59.167 Supported Log Pages Log Page: May Support 00:21:59.167 Commands Supported & Effects Log Page: Not Supported 00:21:59.167 Feature Identifiers & Effects Log Page:May Support 00:21:59.167 NVMe-MI Commands & Effects Log Page: May Support 00:21:59.167 Data Area 4 for Telemetry Log: Not Supported 00:21:59.167 Error Log Page Entries Supported: 1 00:21:59.167 Keep Alive: Not Supported 00:21:59.167 00:21:59.167 NVM Command Set Attributes 00:21:59.167 ========================== 00:21:59.167 Submission Queue Entry Size 00:21:59.167 Max: 1 00:21:59.167 Min: 1 00:21:59.167 Completion Queue Entry Size 00:21:59.167 Max: 1 00:21:59.167 Min: 1 00:21:59.167 Number of Namespaces: 0 00:21:59.167 Compare Command: Not Supported 00:21:59.167 Write Uncorrectable Command: Not Supported 00:21:59.167 Dataset Management Command: Not Supported 00:21:59.167 Write Zeroes Command: Not Supported 00:21:59.167 Set Features Save Field: Not Supported 00:21:59.167 Reservations: Not Supported 00:21:59.167 Timestamp: Not Supported 00:21:59.167 Copy: Not Supported 00:21:59.167 Volatile Write Cache: Not Present 00:21:59.167 Atomic Write Unit (Normal): 1 00:21:59.167 Atomic Write Unit (PFail): 1 00:21:59.167 Atomic Compare & Write Unit: 1 00:21:59.167 Fused Compare & Write: Not Supported 00:21:59.167 Scatter-Gather List 00:21:59.167 SGL Command Set: Supported 00:21:59.167 SGL Keyed: Not Supported 00:21:59.167 SGL Bit Bucket Descriptor: Not Supported 00:21:59.167 SGL Metadata Pointer: Not Supported 00:21:59.167 Oversized SGL: Not Supported 00:21:59.167 SGL Metadata Address: Not Supported 00:21:59.167 SGL Offset: Supported 00:21:59.167 Transport SGL Data Block: Not Supported 00:21:59.167 Replay Protected Memory Block: Not Supported 00:21:59.167 00:21:59.167 Firmware Slot Information 00:21:59.167 ========================= 00:21:59.167 Active slot: 0 00:21:59.167 00:21:59.167 00:21:59.167 Error Log 00:21:59.167 ========= 00:21:59.167 00:21:59.167 Active Namespaces 00:21:59.167 ================= 00:21:59.167 Discovery Log Page 00:21:59.167 ================== 00:21:59.167 
Generation Counter: 2 00:21:59.167 Number of Records: 2 00:21:59.167 Record Format: 0 00:21:59.167 00:21:59.167 Discovery Log Entry 0 00:21:59.167 ---------------------- 00:21:59.167 Transport Type: 3 (TCP) 00:21:59.167 Address Family: 1 (IPv4) 00:21:59.167 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:59.167 Entry Flags: 00:21:59.167 Duplicate Returned Information: 0 00:21:59.167 Explicit Persistent Connection Support for Discovery: 0 00:21:59.167 Transport Requirements: 00:21:59.167 Secure Channel: Not Specified 00:21:59.167 Port ID: 1 (0x0001) 00:21:59.167 Controller ID: 65535 (0xffff) 00:21:59.167 Admin Max SQ Size: 32 00:21:59.167 Transport Service Identifier: 4420 00:21:59.167 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:59.167 Transport Address: 10.0.0.1 00:21:59.167 Discovery Log Entry 1 00:21:59.167 ---------------------- 00:21:59.167 Transport Type: 3 (TCP) 00:21:59.167 Address Family: 1 (IPv4) 00:21:59.167 Subsystem Type: 2 (NVM Subsystem) 00:21:59.167 Entry Flags: 00:21:59.167 Duplicate Returned Information: 0 00:21:59.167 Explicit Persistent Connection Support for Discovery: 0 00:21:59.167 Transport Requirements: 00:21:59.167 Secure Channel: Not Specified 00:21:59.167 Port ID: 1 (0x0001) 00:21:59.167 Controller ID: 65535 (0xffff) 00:21:59.167 Admin Max SQ Size: 32 00:21:59.167 Transport Service Identifier: 4420 00:21:59.167 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:21:59.167 Transport Address: 10.0.0.1 00:21:59.167 13:50:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:59.167 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.167 get_feature(0x01) failed 00:21:59.167 get_feature(0x02) failed 00:21:59.167 get_feature(0x04) failed 00:21:59.167 ===================================================== 00:21:59.167 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:59.167 ===================================================== 00:21:59.167 Controller Capabilities/Features 00:21:59.167 ================================ 00:21:59.167 Vendor ID: 0000 00:21:59.167 Subsystem Vendor ID: 0000 00:21:59.167 Serial Number: 1b80d86c74ca6d801719 00:21:59.167 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:21:59.167 Firmware Version: 6.7.0-68 00:21:59.167 Recommended Arb Burst: 6 00:21:59.167 IEEE OUI Identifier: 00 00 00 00:21:59.167 Multi-path I/O 00:21:59.167 May have multiple subsystem ports: Yes 00:21:59.167 May have multiple controllers: Yes 00:21:59.167 Associated with SR-IOV VF: No 00:21:59.167 Max Data Transfer Size: Unlimited 00:21:59.167 Max Number of Namespaces: 1024 00:21:59.167 Max Number of I/O Queues: 128 00:21:59.167 NVMe Specification Version (VS): 1.3 00:21:59.167 NVMe Specification Version (Identify): 1.3 00:21:59.167 Maximum Queue Entries: 1024 00:21:59.167 Contiguous Queues Required: No 00:21:59.167 Arbitration Mechanisms Supported 00:21:59.167 Weighted Round Robin: Not Supported 00:21:59.167 Vendor Specific: Not Supported 00:21:59.167 Reset Timeout: 7500 ms 00:21:59.167 Doorbell Stride: 4 bytes 00:21:59.167 NVM Subsystem Reset: Not Supported 00:21:59.167 Command Sets Supported 00:21:59.167 NVM Command Set: Supported 00:21:59.167 Boot Partition: Not Supported 00:21:59.167 Memory Page Size Minimum: 4096 bytes 00:21:59.167 Memory Page Size Maximum: 4096 bytes 00:21:59.167 
Persistent Memory Region: Not Supported 00:21:59.167 Optional Asynchronous Events Supported 00:21:59.167 Namespace Attribute Notices: Supported 00:21:59.167 Firmware Activation Notices: Not Supported 00:21:59.167 ANA Change Notices: Supported 00:21:59.167 PLE Aggregate Log Change Notices: Not Supported 00:21:59.167 LBA Status Info Alert Notices: Not Supported 00:21:59.167 EGE Aggregate Log Change Notices: Not Supported 00:21:59.167 Normal NVM Subsystem Shutdown event: Not Supported 00:21:59.167 Zone Descriptor Change Notices: Not Supported 00:21:59.167 Discovery Log Change Notices: Not Supported 00:21:59.167 Controller Attributes 00:21:59.167 128-bit Host Identifier: Supported 00:21:59.167 Non-Operational Permissive Mode: Not Supported 00:21:59.167 NVM Sets: Not Supported 00:21:59.167 Read Recovery Levels: Not Supported 00:21:59.167 Endurance Groups: Not Supported 00:21:59.167 Predictable Latency Mode: Not Supported 00:21:59.167 Traffic Based Keep ALive: Supported 00:21:59.167 Namespace Granularity: Not Supported 00:21:59.167 SQ Associations: Not Supported 00:21:59.167 UUID List: Not Supported 00:21:59.167 Multi-Domain Subsystem: Not Supported 00:21:59.167 Fixed Capacity Management: Not Supported 00:21:59.167 Variable Capacity Management: Not Supported 00:21:59.167 Delete Endurance Group: Not Supported 00:21:59.167 Delete NVM Set: Not Supported 00:21:59.167 Extended LBA Formats Supported: Not Supported 00:21:59.167 Flexible Data Placement Supported: Not Supported 00:21:59.167 00:21:59.167 Controller Memory Buffer Support 00:21:59.167 ================================ 00:21:59.167 Supported: No 00:21:59.167 00:21:59.167 Persistent Memory Region Support 00:21:59.167 ================================ 00:21:59.167 Supported: No 00:21:59.167 00:21:59.167 Admin Command Set Attributes 00:21:59.167 ============================ 00:21:59.167 Security Send/Receive: Not Supported 00:21:59.167 Format NVM: Not Supported 00:21:59.167 Firmware Activate/Download: Not Supported 00:21:59.167 Namespace Management: Not Supported 00:21:59.167 Device Self-Test: Not Supported 00:21:59.167 Directives: Not Supported 00:21:59.167 NVMe-MI: Not Supported 00:21:59.167 Virtualization Management: Not Supported 00:21:59.167 Doorbell Buffer Config: Not Supported 00:21:59.167 Get LBA Status Capability: Not Supported 00:21:59.167 Command & Feature Lockdown Capability: Not Supported 00:21:59.167 Abort Command Limit: 4 00:21:59.167 Async Event Request Limit: 4 00:21:59.167 Number of Firmware Slots: N/A 00:21:59.167 Firmware Slot 1 Read-Only: N/A 00:21:59.167 Firmware Activation Without Reset: N/A 00:21:59.167 Multiple Update Detection Support: N/A 00:21:59.167 Firmware Update Granularity: No Information Provided 00:21:59.167 Per-Namespace SMART Log: Yes 00:21:59.167 Asymmetric Namespace Access Log Page: Supported 00:21:59.167 ANA Transition Time : 10 sec 00:21:59.167 00:21:59.167 Asymmetric Namespace Access Capabilities 00:21:59.167 ANA Optimized State : Supported 00:21:59.167 ANA Non-Optimized State : Supported 00:21:59.167 ANA Inaccessible State : Supported 00:21:59.167 ANA Persistent Loss State : Supported 00:21:59.167 ANA Change State : Supported 00:21:59.167 ANAGRPID is not changed : No 00:21:59.168 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:21:59.168 00:21:59.168 ANA Group Identifier Maximum : 128 00:21:59.168 Number of ANA Group Identifiers : 128 00:21:59.168 Max Number of Allowed Namespaces : 1024 00:21:59.168 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:21:59.168 Command Effects Log Page: Supported 
00:21:59.168 Get Log Page Extended Data: Supported 00:21:59.168 Telemetry Log Pages: Not Supported 00:21:59.168 Persistent Event Log Pages: Not Supported 00:21:59.168 Supported Log Pages Log Page: May Support 00:21:59.168 Commands Supported & Effects Log Page: Not Supported 00:21:59.168 Feature Identifiers & Effects Log Page:May Support 00:21:59.168 NVMe-MI Commands & Effects Log Page: May Support 00:21:59.168 Data Area 4 for Telemetry Log: Not Supported 00:21:59.168 Error Log Page Entries Supported: 128 00:21:59.168 Keep Alive: Supported 00:21:59.168 Keep Alive Granularity: 1000 ms 00:21:59.168 00:21:59.168 NVM Command Set Attributes 00:21:59.168 ========================== 00:21:59.168 Submission Queue Entry Size 00:21:59.168 Max: 64 00:21:59.168 Min: 64 00:21:59.168 Completion Queue Entry Size 00:21:59.168 Max: 16 00:21:59.168 Min: 16 00:21:59.168 Number of Namespaces: 1024 00:21:59.168 Compare Command: Not Supported 00:21:59.168 Write Uncorrectable Command: Not Supported 00:21:59.168 Dataset Management Command: Supported 00:21:59.168 Write Zeroes Command: Supported 00:21:59.168 Set Features Save Field: Not Supported 00:21:59.168 Reservations: Not Supported 00:21:59.168 Timestamp: Not Supported 00:21:59.168 Copy: Not Supported 00:21:59.168 Volatile Write Cache: Present 00:21:59.168 Atomic Write Unit (Normal): 1 00:21:59.168 Atomic Write Unit (PFail): 1 00:21:59.168 Atomic Compare & Write Unit: 1 00:21:59.168 Fused Compare & Write: Not Supported 00:21:59.168 Scatter-Gather List 00:21:59.168 SGL Command Set: Supported 00:21:59.168 SGL Keyed: Not Supported 00:21:59.168 SGL Bit Bucket Descriptor: Not Supported 00:21:59.168 SGL Metadata Pointer: Not Supported 00:21:59.168 Oversized SGL: Not Supported 00:21:59.168 SGL Metadata Address: Not Supported 00:21:59.168 SGL Offset: Supported 00:21:59.168 Transport SGL Data Block: Not Supported 00:21:59.168 Replay Protected Memory Block: Not Supported 00:21:59.168 00:21:59.168 Firmware Slot Information 00:21:59.168 ========================= 00:21:59.168 Active slot: 0 00:21:59.168 00:21:59.168 Asymmetric Namespace Access 00:21:59.168 =========================== 00:21:59.168 Change Count : 0 00:21:59.168 Number of ANA Group Descriptors : 1 00:21:59.168 ANA Group Descriptor : 0 00:21:59.168 ANA Group ID : 1 00:21:59.168 Number of NSID Values : 1 00:21:59.168 Change Count : 0 00:21:59.168 ANA State : 1 00:21:59.168 Namespace Identifier : 1 00:21:59.168 00:21:59.168 Commands Supported and Effects 00:21:59.168 ============================== 00:21:59.168 Admin Commands 00:21:59.168 -------------- 00:21:59.168 Get Log Page (02h): Supported 00:21:59.168 Identify (06h): Supported 00:21:59.168 Abort (08h): Supported 00:21:59.168 Set Features (09h): Supported 00:21:59.168 Get Features (0Ah): Supported 00:21:59.168 Asynchronous Event Request (0Ch): Supported 00:21:59.168 Keep Alive (18h): Supported 00:21:59.168 I/O Commands 00:21:59.168 ------------ 00:21:59.168 Flush (00h): Supported 00:21:59.168 Write (01h): Supported LBA-Change 00:21:59.168 Read (02h): Supported 00:21:59.168 Write Zeroes (08h): Supported LBA-Change 00:21:59.168 Dataset Management (09h): Supported 00:21:59.168 00:21:59.168 Error Log 00:21:59.168 ========= 00:21:59.168 Entry: 0 00:21:59.168 Error Count: 0x3 00:21:59.168 Submission Queue Id: 0x0 00:21:59.168 Command Id: 0x5 00:21:59.168 Phase Bit: 0 00:21:59.168 Status Code: 0x2 00:21:59.168 Status Code Type: 0x0 00:21:59.168 Do Not Retry: 1 00:21:59.168 Error Location: 0x28 00:21:59.168 LBA: 0x0 00:21:59.168 Namespace: 0x0 00:21:59.168 Vendor Log 
Page: 0x0 00:21:59.168 ----------- 00:21:59.168 Entry: 1 00:21:59.168 Error Count: 0x2 00:21:59.168 Submission Queue Id: 0x0 00:21:59.168 Command Id: 0x5 00:21:59.168 Phase Bit: 0 00:21:59.168 Status Code: 0x2 00:21:59.168 Status Code Type: 0x0 00:21:59.168 Do Not Retry: 1 00:21:59.168 Error Location: 0x28 00:21:59.168 LBA: 0x0 00:21:59.168 Namespace: 0x0 00:21:59.168 Vendor Log Page: 0x0 00:21:59.168 ----------- 00:21:59.168 Entry: 2 00:21:59.168 Error Count: 0x1 00:21:59.168 Submission Queue Id: 0x0 00:21:59.168 Command Id: 0x4 00:21:59.168 Phase Bit: 0 00:21:59.168 Status Code: 0x2 00:21:59.168 Status Code Type: 0x0 00:21:59.168 Do Not Retry: 1 00:21:59.168 Error Location: 0x28 00:21:59.168 LBA: 0x0 00:21:59.168 Namespace: 0x0 00:21:59.168 Vendor Log Page: 0x0 00:21:59.168 00:21:59.168 Number of Queues 00:21:59.168 ================ 00:21:59.168 Number of I/O Submission Queues: 128 00:21:59.168 Number of I/O Completion Queues: 128 00:21:59.168 00:21:59.168 ZNS Specific Controller Data 00:21:59.168 ============================ 00:21:59.168 Zone Append Size Limit: 0 00:21:59.168 00:21:59.168 00:21:59.168 Active Namespaces 00:21:59.168 ================= 00:21:59.168 get_feature(0x05) failed 00:21:59.168 Namespace ID:1 00:21:59.168 Command Set Identifier: NVM (00h) 00:21:59.168 Deallocate: Supported 00:21:59.168 Deallocated/Unwritten Error: Not Supported 00:21:59.168 Deallocated Read Value: Unknown 00:21:59.168 Deallocate in Write Zeroes: Not Supported 00:21:59.168 Deallocated Guard Field: 0xFFFF 00:21:59.168 Flush: Supported 00:21:59.168 Reservation: Not Supported 00:21:59.168 Namespace Sharing Capabilities: Multiple Controllers 00:21:59.168 Size (in LBAs): 1953525168 (931GiB) 00:21:59.168 Capacity (in LBAs): 1953525168 (931GiB) 00:21:59.168 Utilization (in LBAs): 1953525168 (931GiB) 00:21:59.168 UUID: 851dd10c-851f-4eae-899e-467aa8ed8354 00:21:59.168 Thin Provisioning: Not Supported 00:21:59.168 Per-NS Atomic Units: Yes 00:21:59.168 Atomic Boundary Size (Normal): 0 00:21:59.168 Atomic Boundary Size (PFail): 0 00:21:59.168 Atomic Boundary Offset: 0 00:21:59.168 NGUID/EUI64 Never Reused: No 00:21:59.168 ANA group ID: 1 00:21:59.168 Namespace Write Protected: No 00:21:59.168 Number of LBA Formats: 1 00:21:59.168 Current LBA Format: LBA Format #00 00:21:59.168 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:59.168 00:21:59.168 13:50:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:21:59.168 13:50:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:59.168 13:50:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:21:59.168 13:50:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:59.168 13:50:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:21:59.168 13:50:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:59.168 13:50:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:59.168 rmmod nvme_tcp 00:21:59.168 rmmod nvme_fabrics 00:21:59.168 13:50:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:59.168 13:50:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:21:59.168 13:50:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:21:59.168 13:50:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:59.168 13:50:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:59.168 13:50:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:59.168 13:50:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:59.168 13:50:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:59.168 13:50:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:59.168 13:50:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.168 13:50:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.168 13:50:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.732 13:50:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:01.732 13:50:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:01.732 13:50:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:01.732 13:50:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:22:01.732 13:50:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:01.732 13:50:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:01.732 13:50:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:01.732 13:50:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:01.732 13:50:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:01.732 13:50:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:22:01.732 13:50:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:02.665 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:02.665 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:02.665 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:02.665 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:02.665 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:02.665 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:02.666 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:02.666 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:02.666 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:02.666 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:02.666 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:02.666 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:02.666 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:02.666 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:02.666 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:22:02.666 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:03.603 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:22:03.861 00:22:03.861 real 0m9.819s 00:22:03.861 user 0m2.124s 00:22:03.861 sys 0m3.612s 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.861 ************************************ 00:22:03.861 END TEST nvmf_identify_kernel_target 00:22:03.861 ************************************ 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:03.861 ************************************ 00:22:03.861 START TEST nvmf_auth_host 00:22:03.861 ************************************ 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:03.861 * Looking for test storage... 00:22:03.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
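The build_nvmf_app_args trace just below repeats a pattern from the previous test: the target launch line is accumulated in the NVMF_APP array, and once the test namespace exists it is prefixed with an ip netns exec wrapper (common.sh@243 and @270 above). In sketch form:

    # Launch-array assembly as traced by nvmf/common.sh@25-@35 and @270.
    NVMF_APP_SHM_ID=0
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # shm id + full trace-flag mask
    NVMF_APP+=("${NO_HUGE[@]}")                  # empty unless no-huge mode
    # after nvmf_tcp_init: run the target inside the test namespace
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

Keeping the wrapper in the same array means every later "${NVMF_APP[@]}" invocation transparently runs namespaced.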
00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:03.861 13:51:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:06.390 13:51:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:06.390 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
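This pass of gather_supported_nvmf_pci_devs classifies PCI functions by vendor:device ID (0x8086:0x159b is the E810/ice part found twice in this run) and then resolves each function to its netdev through sysfs. A reduced sketch; pci_bus_cache is an associative array filled earlier by common.sh's PCI scan (not shown here), and the operstate/up filtering visible in the trace is omitted:

    # PCI -> netdev resolution, reduced from the loop traced here.
    intel=0x8086
    pci_devs=(${pci_bus_cache["$intel:0x159b"]})  # E810 functions, e.g. 0000:0a:00.0/.1
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # netdev(s) bound to the function
        pci_net_devs=("${pci_net_devs[@]##*/}")           # strip paths to interface names
        net_devs+=("${pci_net_devs[@]}")
    done
    # in this run: net_devs=(cvl_0_0 cvl_0_1)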
00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:06.390 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:06.390 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:06.390 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:06.390 13:51:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:06.390 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:06.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:06.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:22:06.391 00:22:06.391 --- 10.0.0.2 ping statistics --- 00:22:06.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.391 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:06.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:06.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:22:06.391 00:22:06.391 --- 10.0.0.1 ping statistics --- 00:22:06.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.391 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=645742 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 645742 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 645742 ']' 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
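With both interfaces addressed and reachable, the harness starts the SPDK target inside the namespace that owns cvl_0_0 and blocks until the RPC socket answers. A minimal sketch of that start/wait pattern, assuming the default /var/tmp/spdk.sock socket and an illustrative poll loop (the trace does not show waitforlisten's internals):

# Launch nvmf_tgt in the target namespace with nvme_auth logging enabled.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!

# Poll the UNIX-domain RPC socket until it accepts commands (~10 s cap, illustrative).
for _ in $(seq 1 100); do
  ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.1
done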
00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:06.391 13:51:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8ab08604eba8dbc88457d814b2897f1b 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.kEv 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8ab08604eba8dbc88457d814b2897f1b 0 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8ab08604eba8dbc88457d814b2897f1b 0 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8ab08604eba8dbc88457d814b2897f1b 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.kEv 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.kEv 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.kEv 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:06.391 13:51:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c5f8f0b5027f25b7a607864b4b7fa76718dea439035d7c4423a0bcdf27f07fba 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.QL1 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c5f8f0b5027f25b7a607864b4b7fa76718dea439035d7c4423a0bcdf27f07fba 3 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c5f8f0b5027f25b7a607864b4b7fa76718dea439035d7c4423a0bcdf27f07fba 3 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c5f8f0b5027f25b7a607864b4b7fa76718dea439035d7c4423a0bcdf27f07fba 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:06.391 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.QL1 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.QL1 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.QL1 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=08d275d11559709331bde91c75a0b37af86392dc09f27da9 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.wcW 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 08d275d11559709331bde91c75a0b37af86392dc09f27da9 0 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 08d275d11559709331bde91c75a0b37af86392dc09f27da9 0 
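Each gen_dhchap_key trace repeats one recipe: draw random bytes, hex-encode them with xxd, and wrap the ASCII hex in the DHHC-1 secret representation, whose base64 payload is the secret followed by its little-endian CRC-32. A sketch of that recipe with the CRC step written out explicitly in place of the script's inline python (digest id 0 = null, 1..3 = sha256/sha384/sha512):

# Sketch: produce a DHHC-1 secret like the /tmp/spdk.key-* files generated above.
key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars; the ASCII hex IS the secret
python3 - "$key" <<'EOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
payload = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
print(f"DHHC-1:00:{payload}:")          # 00 = null digest, as in the keys[0] trace
EOF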
00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=08d275d11559709331bde91c75a0b37af86392dc09f27da9 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.wcW 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.wcW 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.wcW 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eb94db20b82d327b70924a2fcad7fccd24cdbabb932646ed 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.cGb 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eb94db20b82d327b70924a2fcad7fccd24cdbabb932646ed 2 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eb94db20b82d327b70924a2fcad7fccd24cdbabb932646ed 2 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eb94db20b82d327b70924a2fcad7fccd24cdbabb932646ed 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.cGb 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.cGb 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.cGb 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:06.649 13:51:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=771c6b44bd8f432d2e0fc9e577aeeea6 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.30z 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 771c6b44bd8f432d2e0fc9e577aeeea6 1 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 771c6b44bd8f432d2e0fc9e577aeeea6 1 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=771c6b44bd8f432d2e0fc9e577aeeea6 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.30z 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.30z 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.30z 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=88727995db3f5b48cec1147774c87a0d 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.5E1 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 88727995db3f5b48cec1147774c87a0d 1 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 88727995db3f5b48cec1147774c87a0d 1 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=88727995db3f5b48cec1147774c87a0d 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.5E1 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.5E1 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.5E1 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:06.649 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6ac1863af0b7eee7b5fa58a611c112e3b237d2eb02685aa6 00:22:06.650 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:06.650 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.5wG 00:22:06.650 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6ac1863af0b7eee7b5fa58a611c112e3b237d2eb02685aa6 2 00:22:06.650 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6ac1863af0b7eee7b5fa58a611c112e3b237d2eb02685aa6 2 00:22:06.650 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:06.650 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:06.650 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6ac1863af0b7eee7b5fa58a611c112e3b237d2eb02685aa6 00:22:06.650 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:06.650 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:06.906 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.5wG 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.5wG 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.5wG 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:06.907 13:51:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ba1c7d39f2e9a596912559b103bc7e9e 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.gvf 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ba1c7d39f2e9a596912559b103bc7e9e 0 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ba1c7d39f2e9a596912559b103bc7e9e 0 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ba1c7d39f2e9a596912559b103bc7e9e 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.gvf 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.gvf 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.gvf 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0c037b52b2b3255bc50b0cacc01e610f1e7185da87a311ffb92ed6b317d522b9 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.KgG 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0c037b52b2b3255bc50b0cacc01e610f1e7185da87a311ffb92ed6b317d522b9 3 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0c037b52b2b3255bc50b0cacc01e610f1e7185da87a311ffb92ed6b317d522b9 3 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0c037b52b2b3255bc50b0cacc01e610f1e7185da87a311ffb92ed6b317d522b9 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.KgG 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.KgG 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.KgG 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 645742 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 645742 ']' 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:06.907 13:51:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kEv 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.QL1 ]] 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QL1 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.wcW 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.cGb ]] 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.cGb 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.30z 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.5E1 ]] 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5E1 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.5wG 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.gvf ]] 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.gvf 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.KgG 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:22:07.165 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:07.424 13:51:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:07.424 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:07.424 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:07.424 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:07.424 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:07.424 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:07.424 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:07.424 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:07.424 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:07.424 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:22:07.424 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:22:07.424 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:07.424 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:07.424 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:07.424 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:07.424 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:22:07.424 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:07.424 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:07.424 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:07.424 13:51:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:08.358 Waiting for block devices as requested 00:22:08.358 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:22:08.358 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:08.616 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:08.616 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:08.616 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:08.873 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:08.873 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:08.873 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:08.873 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:09.130 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:09.130 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:09.130 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:09.130 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:09.388 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:09.388 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:09.388 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:09.646 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:09.903 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:09.903 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:09.903 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:09.903 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:09.903 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:09.903 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:09.903 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:09.903 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:09.903 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:09.903 No valid GPT data, bailing 00:22:10.161 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:10.161 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:22:10.161 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:22:10.161 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:10.161 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:10.161 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:10.161 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:10.161 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:10.161 13:51:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:22:10.162 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:22:10.162 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:10.162 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:22:10.162 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:10.162 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:22:10.162 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:22:10.162 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:22:10.162 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:10.162 13:51:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:22:10.162 00:22:10.162 Discovery Log Number of Records 2, Generation counter 2 00:22:10.162 =====Discovery Log Entry 0====== 00:22:10.162 trtype: tcp 00:22:10.162 adrfam: ipv4 00:22:10.162 subtype: current discovery subsystem 00:22:10.162 treq: not specified, sq flow control disable supported 00:22:10.162 portid: 1 00:22:10.162 trsvcid: 4420 00:22:10.162 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:10.162 traddr: 10.0.0.1 00:22:10.162 eflags: none 00:22:10.162 sectype: none 00:22:10.162 =====Discovery Log Entry 1====== 00:22:10.162 trtype: tcp 00:22:10.162 adrfam: ipv4 00:22:10.162 subtype: nvme subsystem 00:22:10.162 treq: not specified, sq flow control disable supported 00:22:10.162 portid: 1 00:22:10.162 trsvcid: 4420 00:22:10.162 subnqn: nqn.2024-02.io.spdk:cnode0 00:22:10.162 traddr: 10.0.0.1 00:22:10.162 eflags: none 00:22:10.162 sectype: none 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: ]] 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.162 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.422 nvme0n1 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: ]] 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
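Every connect_authenticate round in the remainder of the log reduces to four RPCs against the running target: constrain the allowed digests/DH groups, attach with a key pair from the keyring, confirm the controller name, then detach. A sketch of one round using rpc.py (relative script path assumed; key names follow the keyring_file_add_key entries loaded earlier):

# One authenticated connect/verify/disconnect round, as exercised per digest/dhgroup/keyid.
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0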
00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.422 nvme0n1 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.422 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:10.705 13:51:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: ]] 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:10.705 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.706 nvme0n1 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: ]] 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.706 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.969 nvme0n1 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: ]] 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.969 13:51:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.230 nvme0n1 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 
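The keyid 4 entry that follows has an empty controller secret (ckey=), so bidirectional authentication is skipped for that slot. The expansion at host/auth.sh@58, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), is what makes that work: bash's :+ operator yields the alternate words only when the slot is set and non-empty, so the extra RPC arguments vanish for keyid 4. A small standalone demonstration of the idiom, with hypothetical slot values:

    # Slot 0 carries a controller secret, slot 1 does not.
    ckeys=("DHHC-1:03:example-secret=:" "")
    for keyid in 0 1; do
        args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=${keyid}: ${#args[@]} extra arg(s) ${args[*]}"
    done
    # keyid=0: 2 extra arg(s) --dhchap-ctrlr-key ckey0
    # keyid=1: 0 extra arg(s)

This is why the attach for keyid 4 below carries only --dhchap-key key4, while every other attach in the trace also passes --dhchap-ctrlr-key.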
00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.230 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.489 nvme0n1 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:11.489 13:51:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: ]] 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:11.489 
13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.489 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.749 nvme0n1 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: ]] 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:11.749 13:51:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.749 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.008 nvme0n1 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: ]] 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:12.008 13:51:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.008 13:51:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.267 nvme0n1 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: ]] 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:12.267 13:51:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.267 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.527 nvme0n1 00:22:12.527 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.527 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:12.527 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.527 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:12.527 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.527 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.527 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
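The connect_authenticate sha256 ffdhe3072 4 body that follows repeats the host-side sequence used by every iteration in this trace: restrict the initiator to one digest/DH-group pair, attach with the key under test, confirm a controller actually materialized, then detach. Condensed to the underlying RPCs, as a sketch over SPDK's scripts/rpc.py and assuming the named secrets (key0..key4, ckey0..ckey3) are already registered on both sides:

    rpc=scripts/rpc.py    # inside an SPDK checkout

    "$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key4                   # keyid 4: no --dhchap-ctrlr-key
    [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]] \
        && echo authenticated               # nvme0 exists only if CHAP succeeded
    "$rpc" bdev_nvme_detach_controller nvme0

The bare nvme0n1 lines interleaved through the trace appear to be the attach call's own output naming the bdev it created for the new controller's namespace.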
00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.528 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.789 nvme0n1 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: ]] 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.789 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.048 nvme0n1 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:13.048 13:51:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: ]] 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.048 13:51:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.307 nvme0n1 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: ]] 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
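The xtrace above repeats one fixed cycle per key: nvmet_auth_set_key pushes the DHHC-1 secrets to the kernel target (the echo 'hmac(sha256)' / echo ffdhe4096 / echo DHHC-1:... lines at host/auth.sh@48-51; the configfs writes they presumably feed are not captured by xtrace), after which connect_authenticate drives the SPDK initiator over its RPC socket. A minimal sketch of that host-side cycle, assuming scripts/rpc.py is on PATH and reusing only what the log shows (10.0.0.1:4420, the host and subsystem NQNs, and key1/ckey1, which are key names registered earlier in the run, not raw DHHC-1 strings):

# Restrict the initiator to the digest/dhgroup pair under test.
rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
# Connect with DH-HMAC-CHAP; --dhchap-ctrlr-key additionally authenticates the target.
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# The iteration passes only if the controller actually materialized...
[[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
# ...and is detached so the next digest/dhgroup/key combination starts clean.
rpc.py bdev_nvme_detach_controller nvme0

Per the NVMe DH-HMAC-CHAP key format, the DHHC-1:<nn>: prefix on each secret records how the base64 payload was derived (00 = unhashed secret, 01/02/03 = secret transformed with SHA-256/384/512), which is why the keyids exercised above cycle through all four variants.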
00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.307 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.567 nvme0n1 00:22:13.567 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.567 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.567 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.567 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.567 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:13.567 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
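One detail in the host/auth.sh@58 lines worth spelling out: the controller key is optional, and the script encodes that with a conditional array expansion rather than an if. A standalone rendering of the same idiom (the expansion and the attach flags are copied from the log; the function name is illustrative):

# ${ckeys[keyid]:+...} expands to nothing when ckeys[keyid] is unset or empty,
# so the --dhchap-ctrlr-key pair is appended only when a controller key exists.
# keyid 4 below carries an empty ckey, and its attach therefore requests
# one-way authentication: the host proves itself without challenging the target.
attach_with_optional_ckey() {
    local keyid=$1
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
        -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
}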
00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: ]] 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:13.826 13:51:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.826 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.087 nvme0n1 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.087 13:51:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.348 nvme0n1 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: ]] 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 
]] 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.348 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.917 nvme0n1 00:22:14.917 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.917 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.917 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.917 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.917 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:14.917 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.917 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.917 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.917 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.917 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.917 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.917 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:14.917 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:22:14.917 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:14.917 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:14.917 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:14.917 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:14.917 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:14.917 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:14.917 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: ]] 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.918 13:51:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.488 nvme0n1 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:15.488 13:51:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: ]] 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.488 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.057 nvme0n1 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:16.057 
13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: ]] 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:16.057 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:16.058 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:16.058 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:16.058 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:16.058 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:16.058 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:16.058 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:16.058 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:16.058 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:16.058 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:16.058 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:16.058 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.058 13:51:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.627 nvme0n1 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.627 13:51:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.196 nvme0n1 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:17.196 13:51:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: ]] 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.196 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.135 nvme0n1 00:22:18.135 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.135 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:18.135 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.135 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.135 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:18.135 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.135 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.135 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:18.135 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.135 13:51:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: ]] 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.135 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.071 nvme0n1 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:19.071 13:51:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: ]] 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.071 13:51:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.008 nvme0n1 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: ]] 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:20.008 13:51:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:20.008 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:20.009 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.009 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.009 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.009 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:20.009 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:20.009 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:20.009 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:20.009 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.009 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.009 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:20.009 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:20.009 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:20.009 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:20.009 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:20.009 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:20.009 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.009 13:51:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.945 nvme0n1 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:20.945 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:20.945 13:51:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:20.946 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:20.946 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:20.946 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.946 13:51:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.884 nvme0n1 00:22:21.884 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.884 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.884 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.884 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.884 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:21.884 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.884 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.884 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.884 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.884 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.884 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.884 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: ]] 00:22:21.885 
13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.885 nvme0n1 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: ]] 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.885 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.143 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:22.143 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:22:22.143 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:22.143 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:22.143 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.143 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.143 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:22.143 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:22.143 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:22.143 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:22.143 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:22.143 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.143 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.143 13:51:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.143 nvme0n1 00:22:22.143 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.143 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.143 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.143 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.143 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:22.143 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.143 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.143 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.143 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.143 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: ]] 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.144 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.402 nvme0n1 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: ]] 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:22.402 13:51:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.402 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.660 nvme0n1 00:22:22.660 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.660 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.660 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.660 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.660 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:22.660 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.660 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.660 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.660 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.660 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.660 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
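For reference, each pass of the loop traced here performs the same round trip: program the kernel nvmet target with the digest, DH group and secret(s) it should expect, restrict the SPDK host side to that one digest/DH group, attach with the matching key pair, confirm the controller comes up as nvme0, and detach. A minimal sketch of one such cycle follows; the rpc_cmd invocations are taken from the trace above, while the nvmet_auth_round_trip helper name and the nvmet configfs attribute paths are assumptions inferred from the echo lines, not confirmed against host/auth.sh.

# Sketch of one authentication round trip, reconstructed from the trace.
# The configfs attribute names below are assumptions inferred from the
# echo 'hmac(...)' / echo <dhgroup> / echo DHHC-1:... lines in the log.
nvmet_auth_round_trip() {
    local digest=$1 dhgroup=$2 keyid=$3 key=$4 ckey=$5
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    local -a ckey_args=()

    # Target side: set the hash, DH group and DH-HMAC-CHAP secrets the
    # kernel target should expect from this host (assumed attributes).
    echo "hmac($digest)" > "$host/dhchap_hash"
    echo "$dhgroup" > "$host/dhchap_dhgroup"
    echo "$key" > "$host/dhchap_key"
    [[ -z "$ckey" ]] || echo "$ckey" > "$host/dhchap_ctrl_key"

    # Host side: allow only the digest/DH group under test, then connect
    # with the matching key pair. The controller key is optional; keyid 4
    # in this trace has an empty ckey and authenticates one-way only.
    [[ -z "$ckey" ]] || ckey_args=(--dhchap-ctrlr-key "ckey$keyid")
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
        --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ckey_args[@]}"

    # A successful authentication leaves exactly one controller named
    # nvme0; detach so the next digest/DH group/key combination starts clean.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

The DHHC-1:NN: prefix on each secret encodes the retained-key transform of the NVMe-oF DH-HMAC-CHAP secret representation (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), which is why the key/ckey strings above carry differing prefixes.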
00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:22.661 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.919 nvme0n1 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: ]] 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:22.919 13:51:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.919 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.178 nvme0n1 00:22:23.178 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.178 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.178 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.178 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:23.178 13:51:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.178 13:51:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: ]] 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:23.178 13:51:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.178 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.436 nvme0n1 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: ]] 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.436 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.695 nvme0n1 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: ]] 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.695 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.954 nvme0n1 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:23.954 
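Before every attach, the get_main_ns_ip helper traced at nvmf/common.sh@741-755 picks which address the initiator should dial for the active transport. A minimal sketch of that logic as the trace shows it (the emptiness guards at @747-750 are compressed into one check; TEST_TRANSPORT and NVMF_INITIATOR_IP are set earlier in the run, outside this excerpt):

    # Map each transport to the *name* of the variable holding its address,
    # then use bash indirection; with tcp this prints 10.0.0.1 as at @755.
    declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    ip=${ip_candidates[$TEST_TRANSPORT]}        # tcp -> "NVMF_INITIATOR_IP"
    [[ -n $ip && -n ${!ip} ]] && echo "${!ip}"  # indirection yields the IP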
13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.954 13:51:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.213 nvme0n1 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:24.213 
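Each passing iteration in this trace follows the same four-step RPC pattern: constrain the host's DH-HMAC-CHAP parameters, attach with the key material for that keyid, confirm the controller appeared, then detach. A standalone sketch of the sequence using SPDK's scripts/rpc.py, taking the bidirectional keyid 3 case from earlier in the trace (rpc_cmd in the log is the suite's wrapper around rpc.py; key3/ckey3 name keys loaded earlier in the run and not shown in this excerpt):

    # Allow exactly one digest and one FFDHE group, as auth.sh@60 does.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # Attach with the host key and the controller (bidirectional) key.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3

    # Authentication succeeded iff the controller is listed (auth.sh@64).
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'  # expect nvme0

    # Tear down before the next digest/dhgroup/keyid combination (auth.sh@65).
    scripts/rpc.py bdev_nvme_detach_controller nvme0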
13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: ]] 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.213 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.472 nvme0n1 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: ]] 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.472 13:51:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.472 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.731 nvme0n1 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: ]] 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.731 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:24.989 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:24.989 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:24.989 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:24.989 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.989 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.989 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:24.989 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:24.989 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:24.989 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:24.989 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:24.989 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.989 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.989 13:51:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.248 nvme0n1 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: ]] 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.248 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.506 nvme0n1 00:22:25.506 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:25.507 13:51:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.507 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.765 nvme0n1 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: ]] 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.765 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.023 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.023 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:26.023 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:26.023 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:26.023 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:26.023 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:26.023 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:26.023 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:26.023 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:26.023 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:26.023 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:26.023 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:26.023 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.023 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.023 13:51:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.281 nvme0n1 00:22:26.281 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.281 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.281 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.281 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.281 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:26.281 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: ]] 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.545 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.809 nvme0n1 00:22:26.809 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.809 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.809 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.809 13:51:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:26.809 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.809 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: ]] 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.067 13:51:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.067 13:51:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.633 nvme0n1 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: ]] 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:27.633 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:27.634 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.634 
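On the target side, nvmet_auth_set_key (auth.sh@42-51) programs the in-kernel nvmet host entry; its four echoes map naturally onto the kernel target's DH-CHAP configfs attributes. The helper's body is not shown in this log, so the following is a hedged reconstruction under that assumption (hostnqn taken from the attach calls in the trace; key strings abbreviated; the 00/01/02/03 field after DHHC-1 records how the secret is transformed: 0 means unhashed, 1-3 mean SHA-256/384/512):

    # Presumed targets of the echoes at auth.sh@48-51: the nvmet host's
    # DH-HMAC-CHAP attributes for this digest/dhgroup/keyid combination.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host/dhchap_hash"             # auth.sh@48
    echo 'ffdhe6144'    > "$host/dhchap_dhgroup"          # auth.sh@49
    echo 'DHHC-1:02:NmFj...WkOw==:' > "$host/dhchap_key"  # auth.sh@50, host secret
    echo 'DHHC-1:00:YmEx...Pfgw:' > "$host/dhchap_ctrl_key"  # auth.sh@51, bidirectional only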
13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.199 nvme0n1 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.199 13:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.199 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.199 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:28.199 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:28.199 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:28.199 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:28.199 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.199 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.199 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:28.199 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.199 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:28.199 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:28.199 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:28.199 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:28.199 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.199 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.457 nvme0n1 00:22:28.457 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:28.715 13:51:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: ]] 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:28.715 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:28.716 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.716 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.716 13:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.648 nvme0n1 00:22:29.648 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.648 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.648 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:29.648 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.648 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.648 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.648 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.648 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.648 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.648 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.648 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.648 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:29.648 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:22:29.648 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:29.648 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:29.648 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:29.648 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:29.648 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:29.648 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:29.648 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:29.648 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:29.648 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: ]] 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.649 13:51:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.582 nvme0n1 00:22:30.582 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.582 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.582 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:30.582 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.582 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: ]] 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.583 
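
A note on the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line that recurs at host/auth.sh@58: it uses bash's ${parameter:+word} expansion, so the --dhchap-ctrlr-key flag is produced only when a controller key exists for that keyid (keyid 4 has an empty ckey, which is why its attach calls in this trace carry no ckey4 argument). A self-contained illustration with hypothetical values, not the real test keys:

    ckeys=("some-secret" "")
    for keyid in "${!ckeys[@]}"; do
        # Expands to two words when ckeys[keyid] is non-empty, to nothing otherwise.
        extra=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${extra[@]:-<unidirectional, no ctrlr key>}"
    done
    # keyid=0 -> --dhchap-ctrlr-key ckey0
    # keyid=1 -> <unidirectional, no ctrlr key>
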
13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.583 13:51:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.517 nvme0n1 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: ]] 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.517 13:51:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.083 nvme0n1 00:22:32.083 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.083 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.083 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.083 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.083 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:32.083 13:51:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:32.341 13:51:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.341 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.275 nvme0n1 00:22:33.275 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.275 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.275 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.275 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.275 13:51:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: ]] 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:33.275 nvme0n1 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: ]] 00:22:33.275 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.276 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.534 nvme0n1 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:22:33.534 
13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: ]] 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:33.534 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:33.535 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:33.535 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.535 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.793 nvme0n1 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: ]] 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:33.793 
13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.793 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:33.794 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.794 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:33.794 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:33.794 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:33.794 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:33.794 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.794 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.794 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:33.794 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.794 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:33.794 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:33.794 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:33.794 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:33.794 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.794 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.052 nvme0n1 00:22:34.052 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.052 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.052 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.052 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.052 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.052 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.052 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.052 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.052 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.052 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:34.052 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.052 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.052 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:22:34.052 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.052 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:34.052 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:34.052 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:34.052 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:34.052 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:34.052 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:34.052 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.053 13:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.311 nvme0n1 00:22:34.311 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.311 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.311 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.311 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.311 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.311 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.311 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.311 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.311 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.311 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.311 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.311 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:34.311 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: ]] 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.312 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.571 nvme0n1 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.571 
13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: ]] 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:34.571 13:51:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.571 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.830 nvme0n1 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:34.830 13:51:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: ]] 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.830 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.086 nvme0n1 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: ]] 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:35.086 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:35.087 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:35.087 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.087 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:35.087 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.087 13:51:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.087 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.087 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:35.087 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:35.087 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:35.087 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:35.087 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.087 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.087 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:35.087 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.087 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:35.087 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:35.087 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:35.087 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:35.087 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.087 13:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.344 nvme0n1 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:35.344 
13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.344 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
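The block above is one complete pass of the nvmf_auth_host loop: for every (dhgroup, keyid) pair, host/auth.sh first programs the kernel nvmet target side (nvmet_auth_set_key, trace lines sh@42-51), then drives the SPDK initiator through connect_authenticate (sh@55 onward). The DHHC-1:0x:...: strings are standard NVMe DH-HMAC-CHAP secrets; the second field records how the key material was sized (00 = free-form, 01/02/03 = SHA-256/384/512-sized). Below is a minimal sketch of that loop and of nvmet_auth_set_key as they can be reconstructed from the xtrace; the array contents, the $hostdir path, and the exact configfs writes are assumptions, since xtrace does not record redirections:

    # driving loop, per trace lines host/auth.sh@101-103
    for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...
        for keyid in "${!keys[@]}"; do         # keyids 0..4 in this run
            nvmet_auth_set_key "sha512" "$dhgroup" "$keyid"
            connect_authenticate "sha512" "$dhgroup" "$keyid"
        done
    done

    # target-side key programming, per trace lines host/auth.sh@42-51
    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest=$1 dhgroup=$2 keyid=$3
        key=${keys[keyid]} ckey=${ckeys[keyid]}
        # the echoed values match the kernel nvmet configfs attribute formats;
        # $hostdir (e.g. /sys/kernel/config/nvmet/hosts/<hostnqn>) is an assumption
        echo "hmac($digest)" > "$hostdir/dhchap_hash"                  # sh@48
        echo "$dhgroup" > "$hostdir/dhchap_dhgroup"                    # sh@49
        echo "$key" > "$hostdir/dhchap_key"                            # sh@50
        [[ -z $ckey ]] || echo "$ckey" > "$hostdir/dhchap_ctrl_key"    # sh@51, bidirectional auth only
    }

In this run keyid=4 carries an empty ckey, so only the unidirectional key is written; keyids 0-3 also load a controller key for bidirectional authentication.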
00:22:35.602 nvme0n1 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: ]] 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:22:35.602 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:35.603 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:35.603 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:35.603 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:35.603 13:51:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.603 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:35.603 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.603 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.603 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.603 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:35.603 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:35.603 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:35.603 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:35.603 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.603 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.603 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:35.603 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.603 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:35.603 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:35.603 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:35.603 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.603 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.603 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.860 nvme0n1 00:22:35.860 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.860 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.860 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.860 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.860 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:35.860 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.860 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.860 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.860 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.860 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.860 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.860 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:35.860 13:51:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:22:35.860 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:35.860 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:35.860 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:35.860 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:35.860 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: ]] 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.861 13:51:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.861 13:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.118 nvme0n1 00:22:36.118 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.118 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.118 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.118 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:36.118 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.118 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.118 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.118 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.118 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.118 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: ]] 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.376 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.634 nvme0n1 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: ]] 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:36.634 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:36.635 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.635 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.635 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.635 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:36.635 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:36.635 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:36.635 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:36.635 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.635 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.635 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:36.635 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:36.635 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:36.635 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:36.635 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:36.635 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:36.635 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.635 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.893 nvme0n1 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.893 13:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.151 nvme0n1 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: ]] 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.151 13:51:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.151 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.717 nvme0n1 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: ]] 00:22:37.717 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:37.718 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:22:37.718 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:37.718 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:37.718 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:37.718 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:37.718 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:37.718 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:37.718 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.718 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.718 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.718 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:37.718 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:37.718 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:37.718 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:37.718 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:37.718 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:37.718 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:37.718 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:37.718 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:37.718 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:37.718 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:37.718 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.718 13:51:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.718 13:51:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.283 nvme0n1 00:22:38.283 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.283 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.283 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.283 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:38.283 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.283 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.283 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: ]] 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.284 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.541 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.541 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:38.541 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:38.541 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:38.541 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:38.542 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.542 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.542 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:38.542 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:38.542 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:38.542 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:38.542 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:38.542 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.542 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.542 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.799 nvme0n1 00:22:38.799 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.799 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.799 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.799 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.799 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:38.799 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: ]] 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:39.055 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:39.056 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:39.056 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.056 13:51:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.313 nvme0n1 00:22:39.313 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:39.571 13:51:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.571 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:39.572 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:39.572 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:39.572 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:39.572 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:39.572 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:39.572 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.572 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.136 nvme0n1 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGFiMDg2MDRlYmE4ZGJjODg0NTdkODE0YjI4OTdmMWLR9+Tp: 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: ]] 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzVmOGYwYjUwMjdmMjViN2E2MDc4NjRiNGI3ZmE3NjcxOGRlYTQzOTAzNWQ3YzQ0MjNhMGJjZGYyN2YwN2ZiYdjdvqM=: 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:40.136 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:40.137 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.137 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.137 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:40.137 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:40.137 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:40.137 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:40.137 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:40.137 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.137 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.137 13:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.068 nvme0n1 00:22:41.068 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.068 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.068 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.068 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.068 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:41.068 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.068 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.068 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.068 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.068 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.068 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.068 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:41.068 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:22:41.068 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:41.068 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:41.068 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: ]] 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.069 13:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.003 nvme0n1 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.003 13:51:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzcxYzZiNDRiZDhmNDMyZDJlMGZjOWU1NzdhZWVlYTYyTA+b: 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: ]] 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODg3Mjc5OTVkYjNmNWI0OGNlYzExNDc3NzRjODdhMGRHAB/p: 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.003 13:51:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.003 13:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.569 nvme0n1 00:22:42.569 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.569 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.569 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.569 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.569 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmFjMTg2M2FmMGI3ZWVlN2I1ZmE1OGE2MTFjMTEyZTNiMjM3ZDJlYjAyNjg1YWE2c0WkOw==: 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: ]] 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmExYzdkMzlmMmU5YTU5NjkxMjU1OWIxMDNiYzdlOWUePfgw: 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:42.827 13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.827 
13:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.761 nvme0n1 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMwMzdiNTJiMmIzMjU1YmM1MGIwY2FjYzAxZTYxMGYxZTcxODVkYTg3YTMxMWZmYjkyZWQ2YjMxN2Q1MjJiOTJ+4To=: 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.761 13:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.720 nvme0n1 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDhkMjc1ZDExNTU5NzA5MzMxYmRlOTFjNzVhMGIzN2FmODYzOTJkYzA5ZjI3ZGE5JtTWSQ==: 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: ]] 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5NGRiMjBiODJkMzI3YjcwOTI0YTJmY2FkN2ZjY2QyNGNkYmFiYjkzMjY0NmVkAnJTUA==: 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.720 request: 00:22:44.720 { 00:22:44.720 "name": "nvme0", 00:22:44.720 "trtype": "tcp", 00:22:44.720 "traddr": "10.0.0.1", 00:22:44.720 "adrfam": "ipv4", 00:22:44.720 "trsvcid": "4420", 00:22:44.720 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:44.720 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:44.720 "prchk_reftag": false, 00:22:44.720 "prchk_guard": false, 00:22:44.720 "hdgst": false, 00:22:44.720 "ddgst": false, 00:22:44.720 "method": "bdev_nvme_attach_controller", 00:22:44.720 "req_id": 1 00:22:44.720 } 00:22:44.720 Got JSON-RPC error response 00:22:44.720 response: 00:22:44.720 { 00:22:44.720 "code": -5, 00:22:44.720 "message": "Input/output error" 00:22:44.720 } 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:44.720 13:51:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:44.720 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.721 request: 00:22:44.721 { 00:22:44.721 "name": "nvme0", 00:22:44.721 "trtype": "tcp", 00:22:44.721 "traddr": "10.0.0.1", 00:22:44.721 "adrfam": "ipv4", 00:22:44.721 "trsvcid": "4420", 00:22:44.721 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:44.721 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:44.721 "prchk_reftag": false, 00:22:44.721 "prchk_guard": false, 00:22:44.721 "hdgst": false, 00:22:44.721 "ddgst": false, 00:22:44.721 "dhchap_key": "key2", 00:22:44.721 "method": "bdev_nvme_attach_controller", 00:22:44.721 "req_id": 1 00:22:44.721 } 00:22:44.721 Got JSON-RPC error response 00:22:44.721 response: 00:22:44.721 { 00:22:44.721 "code": -5, 00:22:44.721 "message": "Input/output error" 00:22:44.721 } 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.721 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.981 request: 00:22:44.981 { 00:22:44.981 "name": "nvme0", 00:22:44.981 "trtype": "tcp", 00:22:44.981 "traddr": "10.0.0.1", 00:22:44.981 "adrfam": "ipv4", 00:22:44.981 "trsvcid": "4420", 00:22:44.981 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:44.981 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:44.981 "prchk_reftag": false, 00:22:44.981 "prchk_guard": false, 00:22:44.981 "hdgst": false, 00:22:44.981 "ddgst": false, 00:22:44.981 "dhchap_key": "key1", 00:22:44.981 "dhchap_ctrlr_key": "ckey2", 00:22:44.981 "method": "bdev_nvme_attach_controller", 00:22:44.981 "req_id": 1 00:22:44.981 } 00:22:44.981 Got JSON-RPC error response 00:22:44.981 response: 00:22:44.981 { 00:22:44.981 "code": -5, 00:22:44.981 "message": "Input/output error" 00:22:44.981 } 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:44.981 rmmod nvme_tcp 00:22:44.981 rmmod nvme_fabrics 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 645742 ']' 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 645742 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 645742 ']' 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 645742 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 645742 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 645742' 00:22:44.981 killing process with pid 645742 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 645742 00:22:44.981 13:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 645742 00:22:45.241 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:45.241 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:45.241 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:45.241 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:45.241 13:51:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:45.241 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.241 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.241 13:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.143 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:47.143 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:47.143 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:47.143 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:22:47.143 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:22:47.143 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:22:47.402 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:47.402 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:47.403 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:47.403 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:47.403 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:47.403 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:22:47.403 13:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:48.340 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:48.340 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:48.340 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:48.340 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:48.340 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:48.600 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:48.600 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:48.600 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:48.600 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:22:48.600 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:22:48.600 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:22:48.600 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:22:48.600 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:22:48.600 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:22:48.600 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:22:48.600 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:22:49.539 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:22:49.539 13:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.kEv /tmp/spdk.key-null.wcW /tmp/spdk.key-sha256.30z /tmp/spdk.key-sha384.5wG /tmp/spdk.key-sha512.KgG /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:22:49.539 13:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:50.915 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:22:50.915 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:22:50.915 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:22:50.915 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:22:50.915 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:22:50.915 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:22:50.915 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:22:50.915 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:22:50.915 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:22:50.915 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:22:50.915 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:22:50.915 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:22:50.915 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:22:50.915 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:22:50.915 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:22:50.915 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:22:50.915 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:22:50.915 00:22:50.915 real 0m47.130s 00:22:50.915 user 0m44.951s 00:22:50.915 sys 0m5.692s 00:22:50.915 13:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:50.915 13:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.915 ************************************ 00:22:50.915 END TEST nvmf_auth_host 00:22:50.915 ************************************ 00:22:50.915 13:51:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:22:50.915 13:51:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:50.915 13:51:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:50.915 13:51:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:50.915 13:51:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.915 ************************************ 00:22:50.915 START TEST nvmf_digest 00:22:50.915 ************************************ 00:22:50.915 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:50.916 * Looking for test storage... 
00:22:50.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:50.916 
13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:22:50.916 13:51:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:53.455 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:53.456 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:53.456 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:53.456 
13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:53.456 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:53.456 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:53.456 13:51:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:53.456 13:51:49 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:53.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:53.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:22:53.456 00:22:53.456 --- 10.0.0.2 ping statistics --- 00:22:53.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.456 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:53.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:53.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:22:53.456 00:22:53.456 --- 10.0.0.1 ping statistics --- 00:22:53.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.456 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:53.456 ************************************ 00:22:53.456 START TEST nvmf_digest_clean 00:22:53.456 ************************************ 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=655425 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 655425 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 655425 ']' 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.456 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:53.457 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.457 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:53.457 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:53.457 [2024-07-25 13:51:50.163225] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:53.457 [2024-07-25 13:51:50.163305] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.457 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.457 [2024-07-25 13:51:50.225903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.457 [2024-07-25 13:51:50.326245] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.457 [2024-07-25 13:51:50.326302] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.457 [2024-07-25 13:51:50.326331] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.457 [2024-07-25 13:51:50.326343] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.457 [2024-07-25 13:51:50.326352] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
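The trace above is the standard nvmfappstart bring-up for the digest tests: nvmf_tgt is launched inside the target network namespace with --wait-for-rpc so it halts in pre-init, and the harness blocks in waitforlisten until the RPC socket answers before configuring the target. A minimal bash sketch of that sequence, using the paths, flags, and namespace name visible in this log (the polling loop here is only an illustrative stand-in for the harness's own waitforlisten helper):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start the target held in pre-init inside the test namespace
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # wait until the app answers on its default RPC socket
  until "$SPDK/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done
  # --wait-for-rpc exists so the DSA variants can reconfigure accel before init;
  # this clean (non-DSA) variant simply completes startup and configures the target
  "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock framework_start_init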
00:22:53.457 [2024-07-25 13:51:50.326378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.457 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:53.457 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:22:53.457 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:53.457 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:53.457 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:53.457 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.457 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:22:53.457 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:22:53.457 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:22:53.457 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.457 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:53.716 null0 00:22:53.716 [2024-07-25 13:51:50.495868] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.716 [2024-07-25 13:51:50.520124] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.716 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.716 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:22:53.716 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:53.716 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:53.716 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:22:53.716 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:22:53.716 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:22:53.716 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:53.716 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=655450 00:22:53.716 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:53.716 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 655450 /var/tmp/bperf.sock 00:22:53.716 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 655450 ']' 00:22:53.716 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:53.716 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:22:53.716 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:53.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:53.716 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:53.716 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:53.716 [2024-07-25 13:51:50.564939] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:53.716 [2024-07-25 13:51:50.565017] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid655450 ] 00:22:53.716 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.716 [2024-07-25 13:51:50.623032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.716 [2024-07-25 13:51:50.729709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.975 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:53.975 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:22:53.975 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:53.975 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:53.975 13:51:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:54.233 13:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:54.233 13:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:54.491 nvme0n1 00:22:54.491 13:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:54.491 13:51:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:54.749 Running I/O for 2 seconds... 
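Everything in this first bperf pass was driven over bdevperf's private RPC socket rather than the target's: bdevperf is started idle with -z --wait-for-rpc, released with framework_start_init, pointed at the listener with bdev_nvme_attach_controller (data digest enabled via --ddgst, which is what this test exercises), and finally kicked by bdevperf.py. Condensed from the commands traced above into one sketch:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock framework_start_init
  # the attach surfaces the remote namespace as bdev nvme0n1, the run's target
  "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # triggers the 2-second run configured by -t 2 above
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests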
00:22:56.653
00:22:56.653 Latency(us)
00:22:56.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:56.653 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:22:56.653 nvme0n1 : 2.00 19642.65 76.73 0.00 0.00 6508.82 3398.16 13495.56
00:22:56.653 ===================================================================================================================
00:22:56.653 Total : 19642.65 76.73 0.00 0.00 6508.82 3398.16 13495.56
00:22:56.653 0
00:22:56.653 13:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:22:56.653 13:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:22:56.653 13:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:22:56.653 13:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:22:56.653 13:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:22:56.653 | select(.opcode=="crc32c")
00:22:56.653 | "\(.module_name) \(.executed)"'
00:22:56.913 13:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:22:56.913 13:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:22:56.913 13:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:22:56.913 13:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:22:56.913 13:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 655450
00:22:56.913 13:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 655450 ']'
00:22:56.913 13:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 655450
00:22:56.913 13:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:22:56.913 13:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:56.913 13:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 655450
00:22:56.913 13:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:22:56.913 13:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:22:56.913 13:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 655450'
00:22:56.913 killing process with pid 655450
00:22:56.913 13:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 655450
00:22:56.913 Received shutdown signal, test time was about 2.000000 seconds
00:22:56.913
00:22:56.913 Latency(us)
00:22:56.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:56.913 ===================================================================================================================
00:22:56.913 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:56.913 13:51:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
common/autotest_common.sh@974 -- # wait 655450 00:22:57.172 13:51:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:22:57.172 13:51:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:57.172 13:51:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:57.172 13:51:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:22:57.172 13:51:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:22:57.172 13:51:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:22:57.172 13:51:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:57.172 13:51:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=655860 00:22:57.172 13:51:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:57.172 13:51:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 655860 /var/tmp/bperf.sock 00:22:57.172 13:51:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 655860 ']' 00:22:57.172 13:51:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:57.172 13:51:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:57.172 13:51:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:57.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:57.172 13:51:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:57.172 13:51:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:57.172 [2024-07-25 13:51:54.166134] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:57.172 [2024-07-25 13:51:54.166210] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid655860 ] 00:22:57.172 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:57.172 Zero copy mechanism will not be used. 
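The zero-copy notice just above is expected for this variant, not a failure: this second pass uses 131072-byte I/O at queue depth 16, and 131072 B (128 KiB) exceeds the 65536 B (64 KiB) zero-copy threshold bdevperf reports, so it falls back to regular buffered submission for these requests and simply logs the fact.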
00:22:57.172 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.430 [2024-07-25 13:51:54.224195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.430 [2024-07-25 13:51:54.326520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.430 13:51:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:57.430 13:51:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:22:57.430 13:51:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:57.430 13:51:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:57.430 13:51:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:57.688 13:51:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:57.688 13:51:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:58.256 nvme0n1 00:22:58.256 13:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:58.256 13:51:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:58.256 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:58.256 Zero copy mechanism will not be used. 00:22:58.256 Running I/O for 2 seconds... 
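When this run completes, the wrapper repeats the digest-offload check seen after the first pass: it pulls accelerator statistics over the bperf socket and asserts that the crc32c opcode was executed, and executed by the expected module (software here, since DSA offload is disabled for this whole job). The check reduces to roughly:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # expected output shape: 'software <count>' with a non-zero count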
00:23:00.795
00:23:00.795 Latency(us)
00:23:00.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:00.795 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:23:00.795 nvme0n1 : 2.00 5319.55 664.94 0.00 0.00 3003.46 995.18 4951.61
00:23:00.795 ===================================================================================================================
00:23:00.795 Total : 5319.55 664.94 0.00 0.00 3003.46 995.18 4951.61
00:23:00.795 0
00:23:00.795 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:23:00.795 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:23:00.795 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:23:00.795 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:23:00.795 | select(.opcode=="crc32c")
00:23:00.795 | "\(.module_name) \(.executed)"'
00:23:00.795 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:23:00.916 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:23:00.916 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:23:00.916 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:23:00.916 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:23:00.916 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 655860
00:23:00.916 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 655860 ']'
00:23:00.916 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 655860
00:23:00.916 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:23:00.916 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:00.916 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 655860
00:23:00.916 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:00.916 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:00.916 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 655860'
00:23:00.916 killing process with pid 655860
00:23:00.916 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 655860
00:23:00.916 Received shutdown signal, test time was about 2.000000 seconds
00:23:00.916
00:23:00.916 Latency(us)
00:23:00.916 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:00.916 ===================================================================================================================
00:23:00.916 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:00.916 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
common/autotest_common.sh@974 -- # wait 655860 00:23:00.796 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:23:00.796 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:00.796 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:00.796 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:00.796 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:00.796 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:00.796 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:00.796 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=656271 00:23:00.796 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:00.796 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 656271 /var/tmp/bperf.sock 00:23:00.796 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 656271 ']' 00:23:00.796 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:00.796 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:00.796 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:00.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:00.796 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:00.796 13:51:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:01.054 [2024-07-25 13:51:57.831044] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
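This is the third invocation of the same run_bperf helper in this test; its positional arguments (workload, I/O size, queue depth, plus a DSA toggle that stays false throughout this job) map directly onto the bdevperf flags visible in each launch line:

  run_bperf randread  4096   128 false  ->  bdevperf -w randread  -o 4096   -q 128 -t 2
  run_bperf randread  131072 16  false  ->  bdevperf -w randread  -o 131072 -q 16  -t 2
  run_bperf randwrite 4096   128 false  ->  bdevperf -w randwrite -o 4096   -q 128 -t 2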
00:23:01.054 [2024-07-25 13:51:57.831150] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid656271 ]
00:23:01.054 EAL: No free 2048 kB hugepages reported on node 1
00:23:01.054 [2024-07-25 13:51:57.887763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:01.054 [2024-07-25 13:51:57.991231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:01.054 13:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:01.054 13:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0
00:23:01.054 13:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:23:01.054 13:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:23:01.054 13:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:23:01.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:01.622 13:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:01.881 nvme0n1
00:23:01.881 13:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:23:01.881 13:51:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:01.881 Running I/O for 2 seconds...
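Note: every timed run in this test follows the same drive sequence seen in the traces above: start bdevperf idle, finish framework init over its private RPC socket, attach the target controller with TCP data digest enabled, then kick off the workload. A minimal sketch reconstructed from the traced commands (rpc.py and bdevperf.py abbreviate the full workspace paths shown above; only -w/-o/-q change between runs):

    # launch bdevperf idle (-z) and have it wait for RPC-driven configuration
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    rpc.py -s /var/tmp/bperf.sock framework_start_init
    # --ddgst enables the NVMe/TCP data digest (CRC32C) this test exercises
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # start the timed run; the Latency(us) table is printed when it completes
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests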
00:23:03.788
00:23:03.788 Latency(us)
00:23:03.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:03.788 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:23:03.788 nvme0n1 : 2.01 20346.43 79.48 0.00 0.00 6276.45 2657.85 11408.12
00:23:03.789 ===================================================================================================================
00:23:03.789 Total : 20346.43 79.48 0.00 0.00 6276.45 2657.85 11408.12
00:23:03.789 0
00:23:04.048 13:52:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:23:04.048 13:52:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:23:04.048 13:52:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:23:04.048 13:52:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:23:04.048 13:52:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:23:04.048 | select(.opcode=="crc32c")
00:23:04.048 | "\(.module_name) \(.executed)"'
00:23:04.308 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:23:04.308 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:23:04.308 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:23:04.308 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:23:04.308 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 656271
00:23:04.308 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 656271 ']'
00:23:04.308 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 656271
00:23:04.308 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:23:04.308 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:04.308 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 656271
00:23:04.308 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:04.308 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:04.308 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 656271'
killing process with pid 656271
00:23:04.308 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 656271
00:23:04.308 Received shutdown signal, test time was about 2.000000 seconds
00:23:04.308
00:23:04.308 Latency(us)
00:23:04.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:04.308 ===================================================================================================================
00:23:04.308 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:04.308 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 656271
00:23:04.585 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:23:04.585 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:23:04.585 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:23:04.585 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:23:04.585 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:23:04.585 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:23:04.585 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:23:04.585 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=656680
00:23:04.585 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:23:04.585 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 656680 /var/tmp/bperf.sock
00:23:04.585 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 656680 ']'
00:23:04.585 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:04.585 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:04.585 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:04.585 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:04.585 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:23:04.585 [2024-07-25 13:52:01.438596] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:23:04.586 [2024-07-25 13:52:01.438678] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid656680 ]
00:23:04.586 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:04.586 Zero copy mechanism will not be used.
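Note on the two bdevperf messages just above: with -o 131072 each I/O is larger than bdevperf's 65536-byte zero-copy threshold, so it falls back to ordinary pre-allocated buffers instead of the zero-copy path; the 4096-byte runs never trip this. Conceptually the guard is only a size comparison; a sketch under that assumption (variable names hypothetical, threshold taken from the message):

    io_size=131072 zcopy_threshold=65536
    if (( io_size > zcopy_threshold )); then
        echo "I/O size of ${io_size} is greater than zero copy threshold (${zcopy_threshold})."
        echo "Zero copy mechanism will not be used."
    fi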
00:23:04.586 EAL: No free 2048 kB hugepages reported on node 1
00:23:04.586 [2024-07-25 13:52:01.503583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:04.857 [2024-07-25 13:52:01.616447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:04.857 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:04.857 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0
00:23:04.857 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:23:04.857 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:23:04.857 13:52:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:23:05.115 13:52:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:05.115 13:52:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:05.681 nvme0n1
00:23:05.681 13:52:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:23:05.681 13:52:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:05.681 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:05.681 Zero copy mechanism will not be used.
00:23:05.681 Running I/O for 2 seconds...
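Note: each run ends with the same crc32c accounting check traced above and below: the harness pulls accel statistics from the bdevperf instance and asserts that digests were executed, and by the expected module ('software' here, since no DSA device was scanned). A minimal sketch assembled from the traced get_accel_stats/jq calls (socket path as in this run):

    # query accel framework stats from bdevperf and keep only the crc32c counters
    stats=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats)
    read -r acc_module acc_executed < <(jq -rc '.operations[]
        | select(.opcode=="crc32c")
        | "\(.module_name) \(.executed)"' <<< "$stats")
    # digests must have run, and on the module the test expects
    (( acc_executed > 0 )) && [[ $acc_module == software ]]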
00:23:08.215
00:23:08.215 Latency(us)
00:23:08.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:08.215 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:23:08.216 nvme0n1 : 2.00 5660.47 707.56 0.00 0.00 2818.45 2208.81 8204.14
00:23:08.216 ===================================================================================================================
00:23:08.216 Total : 5660.47 707.56 0.00 0.00 2818.45 2208.81 8204.14
00:23:08.216 0
00:23:08.216 13:52:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:23:08.216 13:52:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:23:08.216 13:52:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:23:08.216 13:52:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:23:08.216 | select(.opcode=="crc32c")
00:23:08.216 | "\(.module_name) \(.executed)"'
00:23:08.216 13:52:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:23:08.216 13:52:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:23:08.216 13:52:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:23:08.216 13:52:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:23:08.216 13:52:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:23:08.216 13:52:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 656680
00:23:08.216 13:52:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 656680 ']'
00:23:08.216 13:52:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 656680
00:23:08.216 13:52:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:23:08.216 13:52:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:08.216 13:52:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 656680
00:23:08.216 13:52:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:08.216 13:52:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:08.216 13:52:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 656680'
killing process with pid 656680
00:23:08.216 13:52:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 656680
00:23:08.216 Received shutdown signal, test time was about 2.000000 seconds
00:23:08.216
00:23:08.216 Latency(us)
00:23:08.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:08.216 ===================================================================================================================
00:23:08.216 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:08.216 13:52:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 656680
00:23:08.216 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 655425
00:23:08.216 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 655425 ']'
00:23:08.216 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 655425
00:23:08.216 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:23:08.216 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:08.216 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 655425
00:23:08.216 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:23:08.216 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:23:08.216 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 655425'
killing process with pid 655425
00:23:08.216 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 655425
00:23:08.216 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 655425
00:23:08.475
00:23:08.475 real 0m15.379s
00:23:08.475 user 0m30.047s
00:23:08.475 sys 0m4.377s
00:23:08.475 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:08.475 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:23:08.475 ************************************
00:23:08.475 END TEST nvmf_digest_clean
00:23:08.475 ************************************
00:23:08.735 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:23:08.735 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:23:08.735 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable
00:23:08.735 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:23:08.735 ************************************
00:23:08.735 START TEST nvmf_digest_error
00:23:08.735 ************************************
00:23:08.735 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error
00:23:08.735 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:23:08.735 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:08.735 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable
00:23:08.735 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:08.735 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=657231
00:23:08.735 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:23:08.735 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 657231
00:23:08.735 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 657231 ']'
00:23:08.735 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:08.735 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:08.735 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:08.735 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:08.735 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:08.735 [2024-07-25 13:52:05.597609] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:23:08.735 [2024-07-25 13:52:05.597712] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:08.735 EAL: No free 2048 kB hugepages reported on node 1
00:23:08.735 [2024-07-25 13:52:05.662349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:08.993 [2024-07-25 13:52:05.771579] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:08.993 [2024-07-25 13:52:05.771654] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:08.994 [2024-07-25 13:52:05.771669] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:08.994 [2024-07-25 13:52:05.771695] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:08.994 [2024-07-25 13:52:05.771705] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
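Note: the target here was started with -e 0xFFFF, so every tracepoint group is enabled. Per the notices above, a snapshot can be taken live, or the shared-memory trace file can be kept for offline decoding; the first command is verbatim from the log, while reading a copied file with -f is an assumption about the stock spdk_trace tool:

    spdk_trace -s nvmf -i 0            # live snapshot while the target is running
    cp /dev/shm/nvmf_trace.0 /tmp/     # preserve the trace after the target exits
    spdk_trace -f /tmp/nvmf_trace.0    # decode offline (-f assumed)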
00:23:08.994 [2024-07-25 13:52:05.771732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:08.994 [2024-07-25 13:52:05.828379] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:08.994 null0
00:23:08.994 [2024-07-25 13:52:05.941650] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:08.994 [2024-07-25 13:52:05.965927] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=657280
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 657280 /var/tmp/bperf.sock
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 657280 ']'
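Note: this is where nvmf_digest_error diverges from nvmf_digest_clean: the accel_assign_opc call above routes every target-side crc32c through the accel 'error' module, and the rpc_cmd traces below first disable injection and then arm it to corrupt 256 digests, so the initiator sees data digest failures rather than clean completions. The target-side RPC sequence, reconstructed verbatim from the traced calls (rpc.py abbreviates the workspace scripts/rpc.py path):

    rpc.py accel_assign_opc -o crc32c -m error                    # route crc32c through the error-injecting module
    rpc.py accel_error_inject_error -o crc32c -t disable          # start with injection off
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # then corrupt the next 256 crc32c operations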
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:08.994 13:52:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:09.251 [2024-07-25 13:52:06.010621] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:23:09.251 [2024-07-25 13:52:06.010698] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid657280 ]
00:23:09.251 EAL: No free 2048 kB hugepages reported on node 1
00:23:09.251 [2024-07-25 13:52:06.069140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:09.251 [2024-07-25 13:52:06.176270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:09.251 13:52:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:09.251 13:52:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:23:09.251 13:52:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:09.251 13:52:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:09.509 13:52:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:09.509 13:52:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:09.509 13:52:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:09.509 13:52:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:09.509 13:52:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:09.509 13:52:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:10.075 nvme0n1
00:23:10.075 13:52:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:23:10.075 13:52:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:10.075 13:52:06 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:10.075 13:52:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.075 13:52:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:10.075 13:52:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:10.075 Running I/O for 2 seconds... 00:23:10.075 [2024-07-25 13:52:06.971600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.075 [2024-07-25 13:52:06.971662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-25 13:52:06.971683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.075 [2024-07-25 13:52:06.985802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.075 [2024-07-25 13:52:06.985834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-25 13:52:06.985867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.075 [2024-07-25 13:52:07.001672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.075 [2024-07-25 13:52:07.001703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-25 13:52:07.001736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.075 [2024-07-25 13:52:07.012451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.075 [2024-07-25 13:52:07.012479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-25 13:52:07.012511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.075 [2024-07-25 13:52:07.025946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.075 [2024-07-25 13:52:07.025975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-25 13:52:07.026008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.075 [2024-07-25 13:52:07.037225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.075 [2024-07-25 13:52:07.037255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.075 [2024-07-25 13:52:07.037287] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.075 [2024-07-25 13:52:07.050765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.076 [2024-07-25 13:52:07.050809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.076 [2024-07-25 13:52:07.050827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.076 [2024-07-25 13:52:07.064003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.076 [2024-07-25 13:52:07.064033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.076 [2024-07-25 13:52:07.064075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.076 [2024-07-25 13:52:07.075335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.076 [2024-07-25 13:52:07.075379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.076 [2024-07-25 13:52:07.075402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.076 [2024-07-25 13:52:07.088233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.076 [2024-07-25 13:52:07.088265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.076 [2024-07-25 13:52:07.088298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.076 [2024-07-25 13:52:07.103802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.076 [2024-07-25 13:52:07.103832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.076 [2024-07-25 13:52:07.103864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.336 [2024-07-25 13:52:07.115442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.336 [2024-07-25 13:52:07.115471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-25 13:52:07.115501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.336 [2024-07-25 13:52:07.129885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.336 [2024-07-25 13:52:07.129915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23247 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:10.336 [2024-07-25 13:52:07.129948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.336 [2024-07-25 13:52:07.142732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.336 [2024-07-25 13:52:07.142762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-25 13:52:07.142794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.336 [2024-07-25 13:52:07.154185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.336 [2024-07-25 13:52:07.154215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-25 13:52:07.154247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.336 [2024-07-25 13:52:07.166810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.336 [2024-07-25 13:52:07.166840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-25 13:52:07.166872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.336 [2024-07-25 13:52:07.179645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.336 [2024-07-25 13:52:07.179690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-25 13:52:07.179706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.336 [2024-07-25 13:52:07.190993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.336 [2024-07-25 13:52:07.191029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-25 13:52:07.191071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.336 [2024-07-25 13:52:07.204693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.336 [2024-07-25 13:52:07.204722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-25 13:52:07.204753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.336 [2024-07-25 13:52:07.219862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.336 [2024-07-25 13:52:07.219892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:109 nsid:1 lba:18913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-25 13:52:07.219923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.336 [2024-07-25 13:52:07.230519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.336 [2024-07-25 13:52:07.230551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-25 13:52:07.230568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.336 [2024-07-25 13:52:07.246624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.336 [2024-07-25 13:52:07.246671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-25 13:52:07.246687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.336 [2024-07-25 13:52:07.260527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.336 [2024-07-25 13:52:07.260555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-25 13:52:07.260587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.336 [2024-07-25 13:52:07.273972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.336 [2024-07-25 13:52:07.274000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-25 13:52:07.274031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.336 [2024-07-25 13:52:07.287187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.336 [2024-07-25 13:52:07.287219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-25 13:52:07.287237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.336 [2024-07-25 13:52:07.297960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.336 [2024-07-25 13:52:07.297988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-25 13:52:07.298019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.336 [2024-07-25 13:52:07.312351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.336 [2024-07-25 13:52:07.312380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-25 13:52:07.312412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.336 [2024-07-25 13:52:07.326246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.336 [2024-07-25 13:52:07.326277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-25 13:52:07.326294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.336 [2024-07-25 13:52:07.337202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.336 [2024-07-25 13:52:07.337230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-25 13:52:07.337261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.336 [2024-07-25 13:52:07.350036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.336 [2024-07-25 13:52:07.350085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-25 13:52:07.350101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.336 [2024-07-25 13:52:07.362923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.336 [2024-07-25 13:52:07.362952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.336 [2024-07-25 13:52:07.362984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.597 [2024-07-25 13:52:07.376835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.597 [2024-07-25 13:52:07.376864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.597 [2024-07-25 13:52:07.376895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.597 [2024-07-25 13:52:07.389344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.597 [2024-07-25 13:52:07.389375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.597 [2024-07-25 13:52:07.389393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.597 [2024-07-25 13:52:07.400815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 
00:23:10.597 [2024-07-25 13:52:07.400842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.597 [2024-07-25 13:52:07.400873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.597 [2024-07-25 13:52:07.414221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.597 [2024-07-25 13:52:07.414259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.597 [2024-07-25 13:52:07.414292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.598 [2024-07-25 13:52:07.425117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.598 [2024-07-25 13:52:07.425146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.598 [2024-07-25 13:52:07.425176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.598 [2024-07-25 13:52:07.437900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.598 [2024-07-25 13:52:07.437929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.598 [2024-07-25 13:52:07.437959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.598 [2024-07-25 13:52:07.451001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.598 [2024-07-25 13:52:07.451033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.598 [2024-07-25 13:52:07.451078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.598 [2024-07-25 13:52:07.464476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.598 [2024-07-25 13:52:07.464503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.598 [2024-07-25 13:52:07.464533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.598 [2024-07-25 13:52:07.475829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.598 [2024-07-25 13:52:07.475859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.598 [2024-07-25 13:52:07.475890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.598 [2024-07-25 13:52:07.489764] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.598 [2024-07-25 13:52:07.489791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.598 [2024-07-25 13:52:07.489822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.598 [2024-07-25 13:52:07.501379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.598 [2024-07-25 13:52:07.501424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.598 [2024-07-25 13:52:07.501440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.598 [2024-07-25 13:52:07.513711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.598 [2024-07-25 13:52:07.513740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.598 [2024-07-25 13:52:07.513771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.598 [2024-07-25 13:52:07.526476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.598 [2024-07-25 13:52:07.526504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.598 [2024-07-25 13:52:07.526535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.598 [2024-07-25 13:52:07.539023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.598 [2024-07-25 13:52:07.539052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.598 [2024-07-25 13:52:07.539091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.598 [2024-07-25 13:52:07.551633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.598 [2024-07-25 13:52:07.551664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.598 [2024-07-25 13:52:07.551697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.598 [2024-07-25 13:52:07.564892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16bfcb0) 00:23:10.598 [2024-07-25 13:52:07.564920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.598 [2024-07-25 13:52:07.564952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0
00:23:10.598 [... repeated fault-injection output elided: each iteration logs a data digest error on tqpair=(0x16bfcb0) (nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done), the failed READ, len:1 with varying cid/lba (nvme_qpair.c: 243:nvme_io_qpair_print_command), and its completion COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 sqhd:0001 p:0 m:0 dnr:0 (nvme_qpair.c: 474:spdk_nvme_print_completion), timestamps 13:52:07.575686 through 13:52:08.945999 ...]
00:23:12.166 
00:23:12.166                                                                        Latency(us)
00:23:12.166 Device Information                                                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:12.166 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:23:12.166 nvme0n1                                                                :       2.00   19594.06      76.54       0.00       0.00    6523.90    3592.34   21942.42
00:23:12.166 ===================================================================================================================
00:23:12.166 Total                                                                  :               19594.06      76.54       0.00       0.00    6523.90    3592.34   21942.42
00:23:12.166 0
00:23:12.166 13:52:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:12.166 13:52:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:12.166 13:52:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:12.166 | .driver_specific
00:23:12.166 | .nvme_error
00:23:12.166 | .status_code
00:23:12.166 | .command_transient_transport_error'
00:23:12.166 13:52:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
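The `153` in the `(( 153 > 0 ))` check that follows is the value this query returned: `bdev_get_iostat` reports per-bdev NVMe error statistics (the harness enables these with `bdev_nvme_set_options --nvme-error-stat` before attaching each controller, as seen again further down), and `get_transient_errcount` filters the JSON with the `jq` pipeline traced above. A minimal standalone sketch of the same query, reusing this run's socket path and bdev name:

    # Pull the count of completions with status COMMAND TRANSIENT TRANSPORT ERROR
    # out of nvme0n1's I/O statistics on the bdevperf RPC socket.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The assertion below then requires a non-zero count, i.e. at least one corrupted digest must have surfaced as a transient transport error for this stage to pass.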
00:23:12.425 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 153 > 0 ))
00:23:12.425 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 657280
00:23:12.425 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 657280 ']'
00:23:12.425 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 657280
00:23:12.425 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:23:12.425 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:12.425 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 657280
00:23:12.425 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:12.425 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:12.425 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 657280'
00:23:12.425 killing process with pid 657280
00:23:12.425 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 657280
00:23:12.425 Received shutdown signal, test time was about 2.000000 seconds
00:23:12.425 
00:23:12.425                                                                        Latency(us)
00:23:12.425 Device Information                                                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:12.425 ===================================================================================================================
00:23:12.425 Total                                                                  :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:23:12.425 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 657280
00:23:12.683 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:23:12.683 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:23:12.683 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:23:12.683 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:23:12.683 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:23:12.683 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=657747
00:23:12.683 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:23:12.683 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 657747 /var/tmp/bperf.sock
00:23:12.683 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 657747 ']'
00:23:12.683 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:12.683 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:12.683 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:23:12.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:12.683 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:12.684 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
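Here the harness relaunches bdevperf for the next error pass: 131072-byte random reads at queue depth 16 for 2 seconds. The `-z` flag starts bdevperf idle so it can be configured over the RPC socket given with `-r`, and `waitforlisten` simply polls that UNIX socket before the test proceeds. A rough sketch of the launch-and-wait pattern, using this run's paths (the polling loop below is a simplified stand-in for autotest's `waitforlisten` helper):

    # Start bdevperf idle (-z) with a private RPC socket (-r); the workload is
    # wired up entirely over RPC before any I/O is issued.
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Wait until the RPC socket exists and answers a trivial RPC.
    while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done
    scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods > /dev/null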
00:23:12.684 [2024-07-25 13:52:09.550609] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:23:12.684 [2024-07-25 13:52:09.550681] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid657747 ]
00:23:12.684 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:12.684 Zero copy mechanism will not be used.
00:23:12.684 EAL: No free 2048 kB hugepages reported on node 1
00:23:12.684 [2024-07-25 13:52:09.607450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:12.684 [2024-07-25 13:52:09.711880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:12.942 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:12.942 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:23:12.942 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:12.942 13:52:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:13.199 13:52:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:13.199 13:52:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:13.199 13:52:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:13.199 13:52:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:13.199 13:52:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:13.199 13:52:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:13.767 nvme0n1
00:23:13.768 13:52:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:23:13.768 13:52:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:13.768 13:52:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:13.768 13:52:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:13.768 13:52:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:13.768 13:52:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:13.768 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:13.768 Zero copy mechanism will not be used.
00:23:13.768 Running I/O for 2 seconds...
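The RPC sequence just traced is the core of the digest-error scenario: NVMe error-status accounting plus unlimited retries on the bdevperf side, CRC32C fault injection switched off for a clean attach, the controller attached with `--ddgst` so every NVMe/TCP data PDU carries a CRC32C data digest, and injection then flipped to `corrupt` so computed digests stop matching. Note the two sockets: `bperf_rpc` targets `/var/tmp/bperf.sock`, while the bare `rpc_cmd` carries no `-s` and so, presumably, talks to the nvmf target's default RPC socket. Condensed, with that socket split assumed:

    # bdevperf side: count NVMe error statuses per bdev; retry failed I/O indefinitely.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target side (default socket): make sure crc32c fault injection is off while attaching.
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    # bdevperf side: attach over TCP with data digest (DDGST) enabled.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target side: start corrupting crc32c results (-t corrupt -i 32, as traced above).
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    # kick off the timed workload in the idle bdevperf.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

In the output that follows, each failed READ is len:32, i.e. 32 blocks of 4096 bytes, matching the 131072-byte I/O size of this pass (the earlier 4096-byte pass showed len:1).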
00:23:13.768 [2024-07-25 13:52:10.689934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290)
00:23:13.768 [2024-07-25 13:52:10.689998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:13.768 [2024-07-25 13:52:10.690019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
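Every failure in this run follows the same three-line pattern: nvme_tcp.c reports the data digest mismatch produced by the armed CRC32C corruption, nvme_qpair.c prints the READ that was in flight, and the command completes with TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0x0 and status code 0x22, with dnr:0 (do-not-retry clear), so the bdev layer keeps resubmitting under the --bdev-retry-count -1 set above. A hedged way to tally the pattern from a saved console log (the build.log file name is illustrative, not an artifact this job produces):

    # Digest mismatches detected by the TCP transport (file name is an assumption):
    grep -c 'data digest error on tqpair' build.log
    # Completions surfaced as Transient Transport Error (SCT 0x0 / SC 0x22):
    grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' build.log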
[... the same data-digest-error / READ / TRANSIENT TRANSPORT ERROR triplet repeats for every subsequent READ, timestamps 13:52:10.696640 through 13:52:11.460511, differing only in cid, lba, and sqhd values; the remaining triplets are elided ...]
00:23:14.557 [2024-07-25 13:52:11.465976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290)
00:23:14.557 [2024-07-25 13:52:11.466007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:14.557 [2024-07-25 13:52:11.466025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:23:14.557 [2024-07-25 13:52:11.471518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.557 [2024-07-25 13:52:11.471550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.557 [2024-07-25 13:52:11.471567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.557 [2024-07-25 13:52:11.477040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.557 [2024-07-25 13:52:11.477078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.557 [2024-07-25 13:52:11.477097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.557 [2024-07-25 13:52:11.482291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.557 [2024-07-25 13:52:11.482322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.557 [2024-07-25 13:52:11.482339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.557 [2024-07-25 13:52:11.487082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.557 [2024-07-25 13:52:11.487113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.557 [2024-07-25 13:52:11.487130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.557 [2024-07-25 13:52:11.491834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.557 [2024-07-25 13:52:11.491864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.557 [2024-07-25 13:52:11.491885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.557 [2024-07-25 13:52:11.496485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.557 [2024-07-25 13:52:11.496514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.557 [2024-07-25 13:52:11.496532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.557 [2024-07-25 13:52:11.501120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.557 [2024-07-25 13:52:11.501149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.557 [2024-07-25 13:52:11.501165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.557 [2024-07-25 13:52:11.505796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.557 [2024-07-25 13:52:11.505826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.557 [2024-07-25 13:52:11.505843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.557 [2024-07-25 13:52:11.510424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.557 [2024-07-25 13:52:11.510453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.557 [2024-07-25 13:52:11.510470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.557 [2024-07-25 13:52:11.515068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.557 [2024-07-25 13:52:11.515098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.557 [2024-07-25 13:52:11.515115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.557 [2024-07-25 13:52:11.519629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.557 [2024-07-25 13:52:11.519664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.557 [2024-07-25 13:52:11.519682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.557 [2024-07-25 13:52:11.524486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.557 [2024-07-25 13:52:11.524516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.557 [2024-07-25 13:52:11.524533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.558 [2024-07-25 13:52:11.530293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.558 [2024-07-25 13:52:11.530324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.558 [2024-07-25 13:52:11.530341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.558 [2024-07-25 13:52:11.535217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.558 [2024-07-25 13:52:11.535247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.558 [2024-07-25 13:52:11.535264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.558 [2024-07-25 13:52:11.539974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.558 [2024-07-25 13:52:11.540007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.558 [2024-07-25 13:52:11.540025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.558 [2024-07-25 13:52:11.544770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.558 [2024-07-25 13:52:11.544801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.558 [2024-07-25 13:52:11.544818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.558 [2024-07-25 13:52:11.550635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.558 [2024-07-25 13:52:11.550665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.558 [2024-07-25 13:52:11.550682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.558 [2024-07-25 13:52:11.555311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.558 [2024-07-25 13:52:11.555353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.558 [2024-07-25 13:52:11.555371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.558 [2024-07-25 13:52:11.560279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.558 [2024-07-25 13:52:11.560309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.558 [2024-07-25 13:52:11.560326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.558 [2024-07-25 13:52:11.565203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.558 [2024-07-25 13:52:11.565234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.558 [2024-07-25 13:52:11.565250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.558 [2024-07-25 13:52:11.570779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.558 [2024-07-25 13:52:11.570813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:14.558 [2024-07-25 13:52:11.570831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.558 [2024-07-25 13:52:11.576287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.558 [2024-07-25 13:52:11.576318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.558 [2024-07-25 13:52:11.576335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.558 [2024-07-25 13:52:11.582470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.558 [2024-07-25 13:52:11.582501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.558 [2024-07-25 13:52:11.582520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.558 [2024-07-25 13:52:11.588435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.558 [2024-07-25 13:52:11.588466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.558 [2024-07-25 13:52:11.588484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.818 [2024-07-25 13:52:11.594295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.818 [2024-07-25 13:52:11.594332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.818 [2024-07-25 13:52:11.594351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.818 [2024-07-25 13:52:11.600577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.818 [2024-07-25 13:52:11.600609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.818 [2024-07-25 13:52:11.600626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.818 [2024-07-25 13:52:11.606311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.818 [2024-07-25 13:52:11.606346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.818 [2024-07-25 13:52:11.606364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.818 [2024-07-25 13:52:11.609525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.818 [2024-07-25 13:52:11.609555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24960 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.818 [2024-07-25 13:52:11.609579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.818 [2024-07-25 13:52:11.615332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.818 [2024-07-25 13:52:11.615377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.818 [2024-07-25 13:52:11.615393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.818 [2024-07-25 13:52:11.621421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.818 [2024-07-25 13:52:11.621452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.818 [2024-07-25 13:52:11.621469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.818 [2024-07-25 13:52:11.627007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.818 [2024-07-25 13:52:11.627038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.818 [2024-07-25 13:52:11.627054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.818 [2024-07-25 13:52:11.632760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.818 [2024-07-25 13:52:11.632807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.818 [2024-07-25 13:52:11.632824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.818 [2024-07-25 13:52:11.638149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.818 [2024-07-25 13:52:11.638181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.818 [2024-07-25 13:52:11.638199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.818 [2024-07-25 13:52:11.643634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.818 [2024-07-25 13:52:11.643663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.818 [2024-07-25 13:52:11.643694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.818 [2024-07-25 13:52:11.649529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.818 [2024-07-25 13:52:11.649575] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.818 [2024-07-25 13:52:11.649592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.818 [2024-07-25 13:52:11.655210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.818 [2024-07-25 13:52:11.655241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.818 [2024-07-25 13:52:11.655259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.818 [2024-07-25 13:52:11.661208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.818 [2024-07-25 13:52:11.661248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.818 [2024-07-25 13:52:11.661267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.818 [2024-07-25 13:52:11.667982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.818 [2024-07-25 13:52:11.668012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.818 [2024-07-25 13:52:11.668029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.818 [2024-07-25 13:52:11.675517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.818 [2024-07-25 13:52:11.675563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.818 [2024-07-25 13:52:11.675579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.818 [2024-07-25 13:52:11.683191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.818 [2024-07-25 13:52:11.683223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.818 [2024-07-25 13:52:11.683241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.818 [2024-07-25 13:52:11.690822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.818 [2024-07-25 13:52:11.690866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.818 [2024-07-25 13:52:11.690884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.818 [2024-07-25 13:52:11.698644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.818 [2024-07-25 13:52:11.698688] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.818 [2024-07-25 13:52:11.698703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.818 [2024-07-25 13:52:11.706478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.818 [2024-07-25 13:52:11.706516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.818 [2024-07-25 13:52:11.706547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.818 [2024-07-25 13:52:11.714651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.818 [2024-07-25 13:52:11.714681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.819 [2024-07-25 13:52:11.714711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.819 [2024-07-25 13:52:11.722204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.819 [2024-07-25 13:52:11.722235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.819 [2024-07-25 13:52:11.722253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.819 [2024-07-25 13:52:11.729490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.819 [2024-07-25 13:52:11.729521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.819 [2024-07-25 13:52:11.729538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.819 [2024-07-25 13:52:11.737388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.819 [2024-07-25 13:52:11.737444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.819 [2024-07-25 13:52:11.737461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.819 [2024-07-25 13:52:11.745079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.819 [2024-07-25 13:52:11.745122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.819 [2024-07-25 13:52:11.745140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.819 [2024-07-25 13:52:11.752845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12cc290) 00:23:14.819 [2024-07-25 13:52:11.752873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.819 [2024-07-25 13:52:11.752904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.819 [2024-07-25 13:52:11.760579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.819 [2024-07-25 13:52:11.760608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.819 [2024-07-25 13:52:11.760638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.819 [2024-07-25 13:52:11.768372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.819 [2024-07-25 13:52:11.768403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.819 [2024-07-25 13:52:11.768434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.819 [2024-07-25 13:52:11.775935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.819 [2024-07-25 13:52:11.775966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.819 [2024-07-25 13:52:11.775982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.819 [2024-07-25 13:52:11.783212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.819 [2024-07-25 13:52:11.783244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.819 [2024-07-25 13:52:11.783263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.819 [2024-07-25 13:52:11.790815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.819 [2024-07-25 13:52:11.790860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.819 [2024-07-25 13:52:11.790882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.819 [2024-07-25 13:52:11.798745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.819 [2024-07-25 13:52:11.798788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.819 [2024-07-25 13:52:11.798804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.819 [2024-07-25 13:52:11.806536] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.819 [2024-07-25 13:52:11.806581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.819 [2024-07-25 13:52:11.806597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.819 [2024-07-25 13:52:11.813627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.819 [2024-07-25 13:52:11.813658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.819 [2024-07-25 13:52:11.813694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.819 [2024-07-25 13:52:11.819172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.819 [2024-07-25 13:52:11.819203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.819 [2024-07-25 13:52:11.819221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.819 [2024-07-25 13:52:11.824838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.819 [2024-07-25 13:52:11.824869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.819 [2024-07-25 13:52:11.824886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.819 [2024-07-25 13:52:11.830696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.819 [2024-07-25 13:52:11.830727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.819 [2024-07-25 13:52:11.830745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.819 [2024-07-25 13:52:11.836114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.819 [2024-07-25 13:52:11.836146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.819 [2024-07-25 13:52:11.836163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.819 [2024-07-25 13:52:11.841383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.819 [2024-07-25 13:52:11.841414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.819 [2024-07-25 13:52:11.841431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:23:14.819 [2024-07-25 13:52:11.844637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.819 [2024-07-25 13:52:11.844681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.819 [2024-07-25 13:52:11.844698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.819 [2024-07-25 13:52:11.850472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:14.819 [2024-07-25 13:52:11.850502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.819 [2024-07-25 13:52:11.850534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.078 [2024-07-25 13:52:11.856803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.078 [2024-07-25 13:52:11.856833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.078 [2024-07-25 13:52:11.856865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.078 [2024-07-25 13:52:11.863113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.078 [2024-07-25 13:52:11.863160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.078 [2024-07-25 13:52:11.863178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.078 [2024-07-25 13:52:11.868634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.078 [2024-07-25 13:52:11.868677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.078 [2024-07-25 13:52:11.868695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.078 [2024-07-25 13:52:11.874825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.078 [2024-07-25 13:52:11.874855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.078 [2024-07-25 13:52:11.874888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.078 [2024-07-25 13:52:11.880429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.078 [2024-07-25 13:52:11.880474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.078 [2024-07-25 13:52:11.880490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.078 [2024-07-25 13:52:11.885891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.078 [2024-07-25 13:52:11.885919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.078 [2024-07-25 13:52:11.885951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.078 [2024-07-25 13:52:11.891933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.079 [2024-07-25 13:52:11.891964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.079 [2024-07-25 13:52:11.891987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.079 [2024-07-25 13:52:11.899326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.079 [2024-07-25 13:52:11.899370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.079 [2024-07-25 13:52:11.899387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.079 [2024-07-25 13:52:11.905500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.079 [2024-07-25 13:52:11.905529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.079 [2024-07-25 13:52:11.905562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.079 [2024-07-25 13:52:11.911730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.079 [2024-07-25 13:52:11.911761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.079 [2024-07-25 13:52:11.911779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.079 [2024-07-25 13:52:11.917552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.079 [2024-07-25 13:52:11.917598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.079 [2024-07-25 13:52:11.917615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.079 [2024-07-25 13:52:11.923648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.079 [2024-07-25 13:52:11.923680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.079 [2024-07-25 13:52:11.923698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.079 [2024-07-25 13:52:11.930439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.079 [2024-07-25 13:52:11.930483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.079 [2024-07-25 13:52:11.930499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.079 [2024-07-25 13:52:11.936045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.079 [2024-07-25 13:52:11.936082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.079 [2024-07-25 13:52:11.936115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.079 [2024-07-25 13:52:11.941068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.079 [2024-07-25 13:52:11.941098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.079 [2024-07-25 13:52:11.941115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.079 [2024-07-25 13:52:11.946450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.079 [2024-07-25 13:52:11.946499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.079 [2024-07-25 13:52:11.946536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.079 [2024-07-25 13:52:11.952793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.079 [2024-07-25 13:52:11.952824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.079 [2024-07-25 13:52:11.952845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.079 [2024-07-25 13:52:11.960482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.079 [2024-07-25 13:52:11.960513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.079 [2024-07-25 13:52:11.960531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.079 [2024-07-25 13:52:11.965433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.079 [2024-07-25 13:52:11.965464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:15.079 [2024-07-25 13:52:11.965497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.079 [2024-07-25 13:52:11.971354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.079 [2024-07-25 13:52:11.971384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.079 [2024-07-25 13:52:11.971401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.079 [2024-07-25 13:52:11.977892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.079 [2024-07-25 13:52:11.977923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.079 [2024-07-25 13:52:11.977941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.079 [2024-07-25 13:52:11.985086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.079 [2024-07-25 13:52:11.985117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.079 [2024-07-25 13:52:11.985134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.079 [2024-07-25 13:52:11.990676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.079 [2024-07-25 13:52:11.990724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.079 [2024-07-25 13:52:11.990741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.079 [2024-07-25 13:52:11.996140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.079 [2024-07-25 13:52:11.996170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.079 [2024-07-25 13:52:11.996187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.079 [2024-07-25 13:52:12.000991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.079 [2024-07-25 13:52:12.001037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.079 [2024-07-25 13:52:12.001054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.079 [2024-07-25 13:52:12.006893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290) 00:23:15.079 [2024-07-25 13:52:12.006923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:15.079 [2024-07-25 13:52:12.006955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:15.079 [2024-07-25 13:52:12.012411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12cc290)
00:23:15.079 [2024-07-25 13:52:12.012455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:15.079 [2024-07-25 13:52:12.012472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern -- an nvme_tcp.c:1459 data digest error on tqpair=(0x12cc290), the nvme_qpair.c READ command print, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats for every affected READ from 13:52:12.012 through 13:52:12.689 (elapsed 00:23:15.079 to 00:23:15.862) ...]
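Every repetition above is one round of the same host-side loop: the TCP receive path recomputes CRC32C over an incoming data PDU, the check fails because the digest was deliberately corrupted, and the READ completes with the retryable COMMAND TRANSIENT TRANSPORT ERROR status (dnr:0, so the bdev layer may requeue it rather than fail it). When scanning a saved console log for this pattern, a simple tally is usually all that is needed; a hedged sketch follows, where the log file name is hypothetical:

    # Count injected digest failures and their transient completions in a saved log.
    grep -c 'data digest error on tqpair' console.log
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR' console.log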
00:23:15.862
00:23:15.862 Latency(us)
00:23:15.862 Device Information          : runtime(s)    IOPS   MiB/s  Fail/s    TO/s  Average     min     max
00:23:15.862 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:23:15.862 nvme0n1                     :       2.00 5404.76  675.60    0.00    0.00  2955.22  667.50 8786.68
00:23:15.863 ===================================================================================================================
00:23:15.863 Total                       :            5404.76  675.60    0.00    0.00  2955.22  667.50 8786.68
00:23:15.863 0
00:23:15.863 13:52:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:15.863 13:52:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:15.863 13:52:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:15.863 | .driver_specific
00:23:15.863 | .nvme_error
00:23:15.863 | .status_code
00:23:15.863 | .command_transient_transport_error'
13:52:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
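The readout above is how the test turns injected digest failures into a pass/fail signal: because the controller was created with --nvme-error-stat, the bdev layer keeps per-status-code NVMe error counters, bdev_get_iostat exposes them under driver_specific.nvme_error, and the (( 349 > 0 )) check that follows asserts that at least one command completed with the transient transport error status. A standalone sketch of the same query, with the socket path, bdev name, and jq filter taken from the trace:

    # Hedged sketch: read back the transient-transport-error counter the test asserts on.
    errs=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
               bdev_get_iostat -b nvme0n1 |
           jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 )) && echo "saw $errs transient transport errors"   # 349 in this run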
00:23:16.122 13:52:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 349 > 0 ))
00:23:16.122 13:52:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 657747
00:23:16.122 13:52:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 657747 ']'
00:23:16.122 13:52:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 657747
00:23:16.122 13:52:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:23:16.122 13:52:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:16.122 13:52:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 657747
00:23:16.122 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:16.122 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:16.122 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 657747'
killing process with pid 657747
00:23:16.122 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 657747
Received shutdown signal, test time was about 2.000000 seconds
00:23:16.122
00:23:16.122 Latency(us)
00:23:16.122 Device Information          : runtime(s)    IOPS   MiB/s  Fail/s    TO/s  Average     min     max
00:23:16.122 ===================================================================================================================
00:23:16.122 Total                       :               0.00    0.00    0.00    0.00     0.00    0.00    0.00
00:23:16.122 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 657747
00:23:16.381 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:23:16.381 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:23:16.381 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:23:16.381 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:23:16.381 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:23:16.381 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=658189
00:23:16.381 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:23:16.381 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 658189 /var/tmp/bperf.sock
00:23:16.381 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 658189 ']'
00:23:16.381 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:16.381 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:16.381 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
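The launch just traced is the usual bperf pattern in digest.sh: bdevperf comes up idle because of -z and only starts its job once perform_tests arrives over the private RPC socket, which is what lets the script configure error injection first. A minimal sketch of that launch, flags copied from the trace:

    # Hedged sketch: start bdevperf idle on a private RPC socket; -z defers I/O
    # until the perform_tests RPC is sent later via bdevperf.py.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!   # 658189 in this run; waitforlisten polls the socket before issuing RPCs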
00:23:16.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:16.381 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:16.381 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:16.381 [2024-07-25 13:52:13.315191] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:16.381 [2024-07-25 13:52:13.315266] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658189 ] 00:23:16.381 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.381 [2024-07-25 13:52:13.371985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.639 [2024-07-25 13:52:13.477615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.639 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:16.639 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:23:16.639 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:16.639 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:16.897 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:16.897 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.897 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:16.897 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.897 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:16.897 13:52:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:17.465 nvme0n1 00:23:17.465 13:52:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:17.465 13:52:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.465 13:52:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:17.465 13:52:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.465 13:52:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:17.465 13:52:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:23:17.465 Running I/O for 2 seconds... 00:23:17.465 [2024-07-25 13:52:14.435085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ee5c8 00:23:17.465 [2024-07-25 13:52:14.435982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.465 [2024-07-25 13:52:14.436036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:17.465 [2024-07-25 13:52:14.447188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190f81e0 00:23:17.465 [2024-07-25 13:52:14.447942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.465 [2024-07-25 13:52:14.447987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:17.465 [2024-07-25 13:52:14.458360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190eaef0 00:23:17.465 [2024-07-25 13:52:14.459503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.465 [2024-07-25 13:52:14.459533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:17.465 [2024-07-25 13:52:14.469704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ec840 00:23:17.465 [2024-07-25 13:52:14.470734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.465 [2024-07-25 13:52:14.470775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:17.465 [2024-07-25 13:52:14.481333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190fd640 00:23:17.465 [2024-07-25 13:52:14.481992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.465 [2024-07-25 13:52:14.482022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:17.465 [2024-07-25 13:52:14.495706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190e23b8 00:23:17.465 [2024-07-25 13:52:14.497635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.465 [2024-07-25 13:52:14.497679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:17.725 [2024-07-25 13:52:14.504211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190fcdd0 00:23:17.725 [2024-07-25 13:52:14.504958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.725 [2024-07-25 13:52:14.505007] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:17.725 [2024-07-25 13:52:14.515153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190e3060 00:23:17.725 [2024-07-25 13:52:14.515961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.725 [2024-07-25 13:52:14.516005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:17.725 [2024-07-25 13:52:14.529943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190fcdd0 00:23:17.725 [2024-07-25 13:52:14.531578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.725 [2024-07-25 13:52:14.531620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:17.725 [2024-07-25 13:52:14.539231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190f7970 00:23:17.725 [2024-07-25 13:52:14.540306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.725 [2024-07-25 13:52:14.540349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:17.725 [2024-07-25 13:52:14.550724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ec840 00:23:17.725 [2024-07-25 13:52:14.551448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.725 [2024-07-25 13:52:14.551480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:17.725 [2024-07-25 13:52:14.564031] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ee190 00:23:17.725 [2024-07-25 13:52:14.565362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.725 [2024-07-25 13:52:14.565406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:17.725 [2024-07-25 13:52:14.574790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190fb048 00:23:17.725 [2024-07-25 13:52:14.575978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.725 [2024-07-25 13:52:14.576019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:17.725 [2024-07-25 13:52:14.586869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190de470 00:23:17.725 [2024-07-25 13:52:14.588218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.725 [2024-07-25 13:52:14.588260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:17.725 [2024-07-25 13:52:14.596471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190f46d0 00:23:17.725 [2024-07-25 13:52:14.597196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.725 [2024-07-25 13:52:14.597228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:17.725 [2024-07-25 13:52:14.608430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190e9e10 00:23:17.725 [2024-07-25 13:52:14.609350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.725 [2024-07-25 13:52:14.609393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:17.725 [2024-07-25 13:52:14.620258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190eaef0 00:23:17.725 [2024-07-25 13:52:14.621501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.725 [2024-07-25 13:52:14.621543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:17.725 [2024-07-25 13:52:14.634119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190f9f68 00:23:17.725 [2024-07-25 13:52:14.635889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.725 [2024-07-25 13:52:14.635932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:17.725 [2024-07-25 13:52:14.642394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190eea00 00:23:17.725 [2024-07-25 13:52:14.643308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.726 [2024-07-25 13:52:14.643350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:17.726 [2024-07-25 13:52:14.656462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190e0630 00:23:17.726 [2024-07-25 13:52:14.657940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.726 [2024-07-25 13:52:14.657983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.726 [2024-07-25 13:52:14.667634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190df550 00:23:17.726 [2024-07-25 13:52:14.668893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.726 [2024-07-25 
13:52:14.668949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:17.726 [2024-07-25 13:52:14.678576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ecc78 00:23:17.726 [2024-07-25 13:52:14.679712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.726 [2024-07-25 13:52:14.679753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.726 [2024-07-25 13:52:14.689537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190f6cc8 00:23:17.726 [2024-07-25 13:52:14.690629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.726 [2024-07-25 13:52:14.690659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:17.726 [2024-07-25 13:52:14.700570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ea248 00:23:17.726 [2024-07-25 13:52:14.701525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.726 [2024-07-25 13:52:14.701569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:17.726 [2024-07-25 13:52:14.711739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190e0ea0 00:23:17.726 [2024-07-25 13:52:14.712479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.726 [2024-07-25 13:52:14.712520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:17.726 [2024-07-25 13:52:14.726246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190fdeb0 00:23:17.726 [2024-07-25 13:52:14.727813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.726 [2024-07-25 13:52:14.727855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:17.726 [2024-07-25 13:52:14.734154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190eb760 00:23:17.726 [2024-07-25 13:52:14.734865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.726 [2024-07-25 13:52:14.734907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:17.726 [2024-07-25 13:52:14.748261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190e8d30 00:23:17.726 [2024-07-25 13:52:14.749489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
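For reference, the randwrite run whose output appears above was brought up by the RPC sequence traced at host/digest.sh@57 through @69 (paths again shortened). A condensed sketch under the same sockets as the trace; rpc_cmd is the harness helper that talks to the suite's default RPC socket, while the bperf-specific calls go to /var/tmp/bperf.sock:

    # bdevperf host process; -z makes it wait for RPC configuration (waitforlisten)
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!

    # count NVMe errors per status code and retry failed I/O indefinitely
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # keep crc32c injection disabled while attaching with data digest enabled
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # then corrupt crc32c results at the traced interval and start the timed run
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests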
00:23:17.726 [2024-07-25 13:52:14.749532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:17.726 [2024-07-25 13:52:14.759908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190dece0 00:23:17.985 [2024-07-25 13:52:14.761314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.985 [2024-07-25 13:52:14.761357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:17.985 [2024-07-25 13:52:14.770858] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190f7538 00:23:17.985 [2024-07-25 13:52:14.771987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.985 [2024-07-25 13:52:14.772015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:17.985 [2024-07-25 13:52:14.783489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:17.985 [2024-07-25 13:52:14.783776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.985 [2024-07-25 13:52:14.783803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:17.985 [2024-07-25 13:52:14.797367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:17.985 [2024-07-25 13:52:14.797665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.985 [2024-07-25 13:52:14.797711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:17.985 [2024-07-25 13:52:14.811148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:17.985 [2024-07-25 13:52:14.811372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.985 [2024-07-25 13:52:14.811418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:17.985 [2024-07-25 13:52:14.824721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:17.985 [2024-07-25 13:52:14.824962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.985 [2024-07-25 13:52:14.825004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:17.985 [2024-07-25 13:52:14.838494] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:17.985 [2024-07-25 13:52:14.838778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5100 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:17.985 [2024-07-25 13:52:14.838819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:17.985 [2024-07-25 13:52:14.852154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:17.985 [2024-07-25 13:52:14.852373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.985 [2024-07-25 13:52:14.852399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:17.985 [2024-07-25 13:52:14.865822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:17.985 [2024-07-25 13:52:14.866121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.985 [2024-07-25 13:52:14.866148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:17.985 [2024-07-25 13:52:14.879585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:17.985 [2024-07-25 13:52:14.879837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.985 [2024-07-25 13:52:14.879866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:17.985 [2024-07-25 13:52:14.893364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:17.985 [2024-07-25 13:52:14.893588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.985 [2024-07-25 13:52:14.893614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:17.985 [2024-07-25 13:52:14.906990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:17.985 [2024-07-25 13:52:14.907230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.985 [2024-07-25 13:52:14.907272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:17.985 [2024-07-25 13:52:14.920811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:17.985 [2024-07-25 13:52:14.921113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.985 [2024-07-25 13:52:14.921156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:17.985 [2024-07-25 13:52:14.934625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:17.985 [2024-07-25 13:52:14.934929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10670 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.985 [2024-07-25 13:52:14.934971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:17.985 [2024-07-25 13:52:14.948411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:17.985 [2024-07-25 13:52:14.948636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.985 [2024-07-25 13:52:14.948664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:17.986 [2024-07-25 13:52:14.961726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:17.986 [2024-07-25 13:52:14.962031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.986 [2024-07-25 13:52:14.962066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:17.986 [2024-07-25 13:52:14.975432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:17.986 [2024-07-25 13:52:14.975718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.986 [2024-07-25 13:52:14.975760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:17.986 [2024-07-25 13:52:14.989285] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:17.986 [2024-07-25 13:52:14.989514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.986 [2024-07-25 13:52:14.989555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:17.986 [2024-07-25 13:52:15.002958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:17.986 [2024-07-25 13:52:15.003196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.986 [2024-07-25 13:52:15.003239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:17.986 [2024-07-25 13:52:15.016813] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:17.986 [2024-07-25 13:52:15.017071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.986 [2024-07-25 13:52:15.017112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.246 [2024-07-25 13:52:15.030466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.246 [2024-07-25 13:52:15.030760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 
lba:4055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.246 [2024-07-25 13:52:15.030804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.246 [2024-07-25 13:52:15.044245] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.246 [2024-07-25 13:52:15.044470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.246 [2024-07-25 13:52:15.044512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.246 [2024-07-25 13:52:15.057995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.246 [2024-07-25 13:52:15.058302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.246 [2024-07-25 13:52:15.058344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.246 [2024-07-25 13:52:15.071684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.246 [2024-07-25 13:52:15.071998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.246 [2024-07-25 13:52:15.072024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.246 [2024-07-25 13:52:15.085406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.246 [2024-07-25 13:52:15.085695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.246 [2024-07-25 13:52:15.085737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.246 [2024-07-25 13:52:15.099237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.246 [2024-07-25 13:52:15.099484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.246 [2024-07-25 13:52:15.099511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.246 [2024-07-25 13:52:15.112942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.246 [2024-07-25 13:52:15.113224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.246 [2024-07-25 13:52:15.113266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.246 [2024-07-25 13:52:15.126744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.246 [2024-07-25 13:52:15.127081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:51 nsid:1 lba:10493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.246 [2024-07-25 13:52:15.127110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.246 [2024-07-25 13:52:15.140505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.246 [2024-07-25 13:52:15.140792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.246 [2024-07-25 13:52:15.140835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.246 [2024-07-25 13:52:15.154364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.246 [2024-07-25 13:52:15.154659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.246 [2024-07-25 13:52:15.154701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.246 [2024-07-25 13:52:15.168234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.246 [2024-07-25 13:52:15.168459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.246 [2024-07-25 13:52:15.168506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.246 [2024-07-25 13:52:15.181924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.246 [2024-07-25 13:52:15.182163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.246 [2024-07-25 13:52:15.182195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.246 [2024-07-25 13:52:15.195691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.246 [2024-07-25 13:52:15.195984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.246 [2024-07-25 13:52:15.196027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.246 [2024-07-25 13:52:15.209274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.246 [2024-07-25 13:52:15.209481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.246 [2024-07-25 13:52:15.209508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.246 [2024-07-25 13:52:15.222926] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.246 [2024-07-25 13:52:15.223158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:78 nsid:1 lba:19809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.246 [2024-07-25 13:52:15.223202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.246 [2024-07-25 13:52:15.236498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.246 [2024-07-25 13:52:15.236743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.246 [2024-07-25 13:52:15.236788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.246 [2024-07-25 13:52:15.250444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.246 [2024-07-25 13:52:15.250725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.246 [2024-07-25 13:52:15.250768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.246 [2024-07-25 13:52:15.264299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.246 [2024-07-25 13:52:15.264528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.246 [2024-07-25 13:52:15.264570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.246 [2024-07-25 13:52:15.278083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.246 [2024-07-25 13:52:15.278351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.246 [2024-07-25 13:52:15.278379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.507 [2024-07-25 13:52:15.291797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.507 [2024-07-25 13:52:15.292102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.507 [2024-07-25 13:52:15.292151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.507 [2024-07-25 13:52:15.305504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.507 [2024-07-25 13:52:15.305741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.507 [2024-07-25 13:52:15.305783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.507 [2024-07-25 13:52:15.319217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.507 [2024-07-25 13:52:15.319500] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.507 [2024-07-25 13:52:15.319544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.507 [2024-07-25 13:52:15.333025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.507 [2024-07-25 13:52:15.333280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.507 [2024-07-25 13:52:15.333308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.507 [2024-07-25 13:52:15.346495] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.507 [2024-07-25 13:52:15.346736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.507 [2024-07-25 13:52:15.346768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.507 [2024-07-25 13:52:15.360188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.507 [2024-07-25 13:52:15.360409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.507 [2024-07-25 13:52:15.360452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.507 [2024-07-25 13:52:15.373799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.507 [2024-07-25 13:52:15.374153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.507 [2024-07-25 13:52:15.374182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.507 [2024-07-25 13:52:15.387563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.507 [2024-07-25 13:52:15.387805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.507 [2024-07-25 13:52:15.387848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.507 [2024-07-25 13:52:15.401309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.507 [2024-07-25 13:52:15.401527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.507 [2024-07-25 13:52:15.401569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.507 [2024-07-25 13:52:15.414925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.507 [2024-07-25 13:52:15.415177] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.507 [2024-07-25 13:52:15.415204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.507 [2024-07-25 13:52:15.428665] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.507 [2024-07-25 13:52:15.428885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.507 [2024-07-25 13:52:15.428942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.507 [2024-07-25 13:52:15.442375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.507 [2024-07-25 13:52:15.442589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.507 [2024-07-25 13:52:15.442630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.507 [2024-07-25 13:52:15.455795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.507 [2024-07-25 13:52:15.456011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.507 [2024-07-25 13:52:15.456038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.507 [2024-07-25 13:52:15.469455] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.507 [2024-07-25 13:52:15.469703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.507 [2024-07-25 13:52:15.469736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.507 [2024-07-25 13:52:15.482777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.507 [2024-07-25 13:52:15.483007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.507 [2024-07-25 13:52:15.483050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.507 [2024-07-25 13:52:15.496458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.507 [2024-07-25 13:52:15.496670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.507 [2024-07-25 13:52:15.496716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.507 [2024-07-25 13:52:15.510121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.507 [2024-07-25 
13:52:15.510325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.507 [2024-07-25 13:52:15.510353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.507 [2024-07-25 13:52:15.523831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.507 [2024-07-25 13:52:15.524057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.507 [2024-07-25 13:52:15.524096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.507 [2024-07-25 13:52:15.537468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.507 [2024-07-25 13:52:15.537731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.507 [2024-07-25 13:52:15.537759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.766 [2024-07-25 13:52:15.551016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.766 [2024-07-25 13:52:15.551234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.766 [2024-07-25 13:52:15.551261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.766 [2024-07-25 13:52:15.564651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.766 [2024-07-25 13:52:15.564868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.766 [2024-07-25 13:52:15.564895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.766 [2024-07-25 13:52:15.578304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.766 [2024-07-25 13:52:15.578534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.766 [2024-07-25 13:52:15.578577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.766 [2024-07-25 13:52:15.591983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.766 [2024-07-25 13:52:15.592255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.766 [2024-07-25 13:52:15.592283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.766 [2024-07-25 13:52:15.605759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 
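The repetition above is this subtest's expected steady state: with --ddgst on, the host verifies a CRC32C data digest on every data PDU; the injected crc32c corruption periodically produces a mismatch (the tcp.c:2113 data_crc32_calc_done errors), and each affected WRITE completes with status (00/22), the sct/sc pair for the COMMAND TRANSIENT TRANSPORT ERROR named on the same line. Because the run was configured with --bdev-retry-count -1, these completions are retried while the per-status-code counters grow. One way to watch the counter climb during the two-second run, as a hypothetical helper that is not part of the traced scripts:

    # poll the transient-transport-error counter while perform_tests is running
    for _ in 1 2 3 4; do
        scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
            jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
        sleep 0.5
    done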
00:23:18.766 [2024-07-25 13:52:15.606000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.766 [2024-07-25 13:52:15.606043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.766 [2024-07-25 13:52:15.619322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.766 [2024-07-25 13:52:15.619578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.766 [2024-07-25 13:52:15.619606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.766 [2024-07-25 13:52:15.633204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.766 [2024-07-25 13:52:15.633421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.766 [2024-07-25 13:52:15.633467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.766 [2024-07-25 13:52:15.647038] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.766 [2024-07-25 13:52:15.647300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.766 [2024-07-25 13:52:15.647335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.766 [2024-07-25 13:52:15.660650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.766 [2024-07-25 13:52:15.660864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.767 [2024-07-25 13:52:15.660910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.767 [2024-07-25 13:52:15.674333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.767 [2024-07-25 13:52:15.674613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.767 [2024-07-25 13:52:15.674655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.767 [2024-07-25 13:52:15.688092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:18.767 [2024-07-25 13:52:15.688350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.767 [2024-07-25 13:52:15.688377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.767 [2024-07-25 13:52:15.701667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with 
pdu=0x2000190ddc00 00:23:18.767 [2024-07-25 13:52:15.701900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.767 [2024-07-25 13:52:15.701942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0
[48 further data_crc32_calc_done digest errors on tqpair=(0xd15f30) with pdu=0x2000190ddc00 elided: each repeats the same single-block WRITE command print and COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, with cids cycling 51/76/78/80/81, timestamps 13:52:15.715 through 13:52:16.355]
00:23:19.545 [2024-07-25 13:52:16.368601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:19.545 [2024-07-25 13:52:16.368840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.545 [2024-07-25 13:52:16.368883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:19.545 [2024-07-25 13:52:16.382445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:19.545 [2024-07-25 13:52:16.382713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.545 [2024-07-25 13:52:16.382756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:19.545 [2024-07-25 13:52:16.396175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:19.546 [2024-07-25 13:52:16.396404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.546 [2024-07-25 13:52:16.396430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:19.546 [2024-07-25 13:52:16.409777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:19.546 [2024-07-25 13:52:16.410095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.546 [2024-07-25 13:52:16.410122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:19.546 [2024-07-25 13:52:16.423392] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd15f30) with pdu=0x2000190ddc00 00:23:19.546 [2024-07-25 13:52:16.423685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:19.546 [2024-07-25 13:52:16.423710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:19.546 00:23:19.546 Latency(us) 00:23:19.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.546 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:19.546 nvme0n1 : 2.01 19207.94 75.03 0.00 0.00 6648.17 2682.12 17087.91 00:23:19.546 =================================================================================================================== 00:23:19.546 Total : 19207.94 75.03 0.00 0.00 6648.17 2682.12 17087.91 00:23:19.546 0 00:23:19.546 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:19.546 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:19.546 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:19.546 | .driver_specific 00:23:19.546 | .nvme_error 00:23:19.546 | .status_code 00:23:19.546 | 
.command_transient_transport_error' 00:23:19.546 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:19.805 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 151 > 0 )) 00:23:19.805 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 658189 00:23:19.805 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 658189 ']' 00:23:19.805 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 658189 00:23:19.805 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:23:19.805 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:19.805 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 658189 00:23:19.805 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:19.805 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:19.805 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 658189' 00:23:19.805 killing process with pid 658189 00:23:19.805 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 658189 00:23:19.805 Received shutdown signal, test time was about 2.000000 seconds 00:23:19.805 00:23:19.805 Latency(us) 00:23:19.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.805 =================================================================================================================== 00:23:19.805 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:19.805 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 658189 00:23:20.104 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:23:20.104 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:23:20.104 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:23:20.104 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:23:20.104 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:23:20.104 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=658601 00:23:20.104 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:23:20.104 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 658601 /var/tmp/bperf.sock 00:23:20.104 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 658601 ']' 00:23:20.104 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:20.104 13:52:16 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:20.104 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:20.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:20.104 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:20.104 13:52:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:20.104 [2024-07-25 13:52:17.025637] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:20.104 [2024-07-25 13:52:17.025712] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658601 ] 00:23:20.104 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:20.104 Zero copy mechanism will not be used. 00:23:20.104 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.104 [2024-07-25 13:52:17.083995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.363 [2024-07-25 13:52:17.189510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.363 13:52:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:20.363 13:52:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:23:20.363 13:52:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:20.363 13:52:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:20.621 13:52:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:20.621 13:52:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.621 13:52:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:20.621 13:52:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.621 13:52:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:20.621 13:52:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:21.188 nvme0n1 00:23:21.188 13:52:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:21.188 13:52:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.188 13:52:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 
-- # set +x 00:23:21.188 13:52:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.188 13:52:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:21.188 13:52:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:21.188 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:21.188 Zero copy mechanism will not be used. 00:23:21.188 Running I/O for 2 seconds... 00:23:21.188 [2024-07-25 13:52:18.152676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.188 [2024-07-25 13:52:18.153018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.188 [2024-07-25 13:52:18.153079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.188 [2024-07-25 13:52:18.158601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.188 [2024-07-25 13:52:18.158931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.188 [2024-07-25 13:52:18.158961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.188 [2024-07-25 13:52:18.165158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.188 [2024-07-25 13:52:18.165459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.188 [2024-07-25 13:52:18.165488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.188 [2024-07-25 13:52:18.171237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.188 [2024-07-25 13:52:18.171541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.188 [2024-07-25 13:52:18.171571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.188 [2024-07-25 13:52:18.176293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.188 [2024-07-25 13:52:18.176583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.188 [2024-07-25 13:52:18.176613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.188 [2024-07-25 13:52:18.181359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.188 [2024-07-25 13:52:18.181673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.188 
[2024-07-25 13:52:18.181702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.188
[61 further data_crc32_calc_done digest errors on tqpair=(0xd16270) with pdu=0x2000190fef90 elided: each repeats the same 32-block WRITE command print (cid:15) and COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, with sqhd cycling 0001/0021/0041/0061, timestamps 13:52:18.186 through 13:52:18.539]
[2024-07-25 13:52:18.546025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.711 [2024-07-25 13:52:18.546324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.711 [2024-07-25 13:52:18.546353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.711
[2024-07-25 13:52:18.552927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.711 [2024-07-25 13:52:18.553087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.711 [2024-07-25 13:52:18.553116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.711 [2024-07-25 13:52:18.560145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.711 [2024-07-25 13:52:18.560454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.711 [2024-07-25 13:52:18.560482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.711 [2024-07-25 13:52:18.567175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.711 [2024-07-25 13:52:18.567501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.711 [2024-07-25 13:52:18.567541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.711 [2024-07-25 13:52:18.574251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.711 [2024-07-25 13:52:18.574620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.711 [2024-07-25 13:52:18.574649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.711 [2024-07-25 13:52:18.581771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.711 [2024-07-25 13:52:18.582114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.711 [2024-07-25 13:52:18.582151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.711 [2024-07-25 13:52:18.589395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.711 [2024-07-25 13:52:18.589705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.711 [2024-07-25 13:52:18.589732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.711 [2024-07-25 13:52:18.596265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.711 [2024-07-25 13:52:18.596549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.711 [2024-07-25 13:52:18.596578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:23:21.711 [2024-07-25 13:52:18.602670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.711 [2024-07-25 13:52:18.602974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.711 [2024-07-25 13:52:18.603002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.711 [2024-07-25 13:52:18.608329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.711 [2024-07-25 13:52:18.608516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.711 [2024-07-25 13:52:18.608545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.711 [2024-07-25 13:52:18.614969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.711 [2024-07-25 13:52:18.615317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.711 [2024-07-25 13:52:18.615346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.711 [2024-07-25 13:52:18.622071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.711 [2024-07-25 13:52:18.622351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.711 [2024-07-25 13:52:18.622380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.711 [2024-07-25 13:52:18.629723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.711 [2024-07-25 13:52:18.630096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.711 [2024-07-25 13:52:18.630142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.711 [2024-07-25 13:52:18.636400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.711 [2024-07-25 13:52:18.636718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.711 [2024-07-25 13:52:18.636745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.711 [2024-07-25 13:52:18.641576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.711 [2024-07-25 13:52:18.641875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.711 [2024-07-25 13:52:18.641904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.711 [2024-07-25 13:52:18.646687] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.711 [2024-07-25 13:52:18.647001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.711 [2024-07-25 13:52:18.647033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.711 [2024-07-25 13:52:18.651948] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.711 [2024-07-25 13:52:18.652281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.712 [2024-07-25 13:52:18.652311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.712 [2024-07-25 13:52:18.657345] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.712 [2024-07-25 13:52:18.657662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.712 [2024-07-25 13:52:18.657691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.712 [2024-07-25 13:52:18.662257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.712 [2024-07-25 13:52:18.662583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.712 [2024-07-25 13:52:18.662612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.712 [2024-07-25 13:52:18.668096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.712 [2024-07-25 13:52:18.668178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.712 [2024-07-25 13:52:18.668206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.712 [2024-07-25 13:52:18.674705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.712 [2024-07-25 13:52:18.675017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.712 [2024-07-25 13:52:18.675069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.712 [2024-07-25 13:52:18.681042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.712 [2024-07-25 13:52:18.681329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.712 [2024-07-25 13:52:18.681358] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.712 [2024-07-25 13:52:18.688097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.712 [2024-07-25 13:52:18.688411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.712 [2024-07-25 13:52:18.688454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.712 [2024-07-25 13:52:18.694672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.712 [2024-07-25 13:52:18.694968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.712 [2024-07-25 13:52:18.694999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.712 [2024-07-25 13:52:18.701168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.712 [2024-07-25 13:52:18.701509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.712 [2024-07-25 13:52:18.701540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.712 [2024-07-25 13:52:18.707784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.712 [2024-07-25 13:52:18.708148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.712 [2024-07-25 13:52:18.708191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.712 [2024-07-25 13:52:18.714691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.712 [2024-07-25 13:52:18.715033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.712 [2024-07-25 13:52:18.715069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.712 [2024-07-25 13:52:18.721592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.712 [2024-07-25 13:52:18.721916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.712 [2024-07-25 13:52:18.721943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.712 [2024-07-25 13:52:18.728382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.712 [2024-07-25 13:52:18.728715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.712 [2024-07-25 13:52:18.728742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.712 [2024-07-25 13:52:18.735474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.712 [2024-07-25 13:52:18.735787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.712 [2024-07-25 13:52:18.735815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.712 [2024-07-25 13:52:18.741643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.712 [2024-07-25 13:52:18.741980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.712 [2024-07-25 13:52:18.742011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.973 [2024-07-25 13:52:18.747407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.973 [2024-07-25 13:52:18.747750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.973 [2024-07-25 13:52:18.747783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.973 [2024-07-25 13:52:18.752477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.973 [2024-07-25 13:52:18.752787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.973 [2024-07-25 13:52:18.752816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.973 [2024-07-25 13:52:18.757472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.973 [2024-07-25 13:52:18.757821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.973 [2024-07-25 13:52:18.757848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.763046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.763354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.974 [2024-07-25 13:52:18.763400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.769466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.769784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.974 
[2024-07-25 13:52:18.769813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.776048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.776356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.974 [2024-07-25 13:52:18.776384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.782506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.782944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.974 [2024-07-25 13:52:18.782971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.787612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.787908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.974 [2024-07-25 13:52:18.787940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.792765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.793124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.974 [2024-07-25 13:52:18.793154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.797884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.798220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.974 [2024-07-25 13:52:18.798249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.803525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.803880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.974 [2024-07-25 13:52:18.803907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.809578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.809776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:21.974 [2024-07-25 13:52:18.809816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.815202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.815566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.974 [2024-07-25 13:52:18.815593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.821141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.821495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.974 [2024-07-25 13:52:18.821536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.826972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.827266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.974 [2024-07-25 13:52:18.827295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.832138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.832436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.974 [2024-07-25 13:52:18.832467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.837049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.837368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.974 [2024-07-25 13:52:18.837396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.842024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.842340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.974 [2024-07-25 13:52:18.842368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.847012] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.847325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.974 [2024-07-25 13:52:18.847355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.853179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.853487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.974 [2024-07-25 13:52:18.853518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.859634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.859971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.974 [2024-07-25 13:52:18.860000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.866193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.866519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.974 [2024-07-25 13:52:18.866547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.872435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.872758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.974 [2024-07-25 13:52:18.872789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.878824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.879156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.974 [2024-07-25 13:52:18.879184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.885430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.885731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.974 [2024-07-25 13:52:18.885761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.891692] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.892000] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.974 [2024-07-25 13:52:18.892032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.974 [2024-07-25 13:52:18.898276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.974 [2024-07-25 13:52:18.898584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.974 [2024-07-25 13:52:18.898633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.975 [2024-07-25 13:52:18.904755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.975 [2024-07-25 13:52:18.905094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.975 [2024-07-25 13:52:18.905122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.975 [2024-07-25 13:52:18.911276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.975 [2024-07-25 13:52:18.911593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.975 [2024-07-25 13:52:18.911621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.975 [2024-07-25 13:52:18.917691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.975 [2024-07-25 13:52:18.917983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.975 [2024-07-25 13:52:18.918012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.975 [2024-07-25 13:52:18.923829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.975 [2024-07-25 13:52:18.924139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.975 [2024-07-25 13:52:18.924173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.975 [2024-07-25 13:52:18.930263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.975 [2024-07-25 13:52:18.930579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.975 [2024-07-25 13:52:18.930608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.975 [2024-07-25 13:52:18.936926] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.975 [2024-07-25 
13:52:18.937228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.975 [2024-07-25 13:52:18.937258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.975 [2024-07-25 13:52:18.944473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.975 [2024-07-25 13:52:18.944821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.975 [2024-07-25 13:52:18.944848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.975 [2024-07-25 13:52:18.951571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.975 [2024-07-25 13:52:18.951902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.975 [2024-07-25 13:52:18.951931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.975 [2024-07-25 13:52:18.958808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.975 [2024-07-25 13:52:18.959175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.975 [2024-07-25 13:52:18.959204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.975 [2024-07-25 13:52:18.965871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.975 [2024-07-25 13:52:18.966207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.975 [2024-07-25 13:52:18.966237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.975 [2024-07-25 13:52:18.972070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.975 [2024-07-25 13:52:18.972382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.975 [2024-07-25 13:52:18.972409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.975 [2024-07-25 13:52:18.978325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.975 [2024-07-25 13:52:18.978650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.975 [2024-07-25 13:52:18.978677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.975 [2024-07-25 13:52:18.984748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with 
pdu=0x2000190fef90 00:23:21.975 [2024-07-25 13:52:18.985065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.975 [2024-07-25 13:52:18.985094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.975 [2024-07-25 13:52:18.991303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.975 [2024-07-25 13:52:18.991598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.975 [2024-07-25 13:52:18.991627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.975 [2024-07-25 13:52:18.997871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.975 [2024-07-25 13:52:18.998199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.975 [2024-07-25 13:52:18.998227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.975 [2024-07-25 13:52:19.005299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:21.975 [2024-07-25 13:52:19.005612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.975 [2024-07-25 13:52:19.005655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.237 [2024-07-25 13:52:19.012617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.012906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.012940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.238 [2024-07-25 13:52:19.018953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.019253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.019283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.238 [2024-07-25 13:52:19.025509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.025838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.025866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.238 [2024-07-25 13:52:19.030602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.030959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.030988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.238 [2024-07-25 13:52:19.035690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.035992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.036021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.238 [2024-07-25 13:52:19.041146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.041237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.041264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.238 [2024-07-25 13:52:19.046614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.046940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.046970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.238 [2024-07-25 13:52:19.051600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.051877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.051906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.238 [2024-07-25 13:52:19.057413] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.057739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.057769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.238 [2024-07-25 13:52:19.063751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.064096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.064124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.238 [2024-07-25 13:52:19.070965] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.071278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.071307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.238 [2024-07-25 13:52:19.077538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.077884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.077911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.238 [2024-07-25 13:52:19.082807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.083126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.083153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.238 [2024-07-25 13:52:19.088027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.088373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.088401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.238 [2024-07-25 13:52:19.093544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.093837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.093866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.238 [2024-07-25 13:52:19.098572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.098923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.098950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.238 [2024-07-25 13:52:19.103604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.103914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.103942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
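The run above repeats one pattern per I/O: tcp.c:2113:data_crc32_calc_done reports a data digest error on the qpair, nvme_qpair.c:243 then prints the affected WRITE, and nvme_qpair.c:474 prints its completion as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. a retryable failure. The digest being checked is the NVMe/TCP data digest (DDGST), a CRC32C over the PDU's data section. The sketch below is a minimal illustration of that check, assuming a plain bitwise CRC32C; verify_data_digest() and the rest are illustrative stand-ins, not SPDK's actual tcp.c code (SPDK computes the digest via its own CRC32C helpers).

/*
 * Minimal sketch of an NVMe/TCP data-digest (DDGST) check of the kind that
 * produces the "Data digest error" lines above. NVMe/TCP specifies CRC32C
 * (Castagnoli) over the PDU DATA field; names here are illustrative.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Reflected CRC32C, polynomial 0x1EDC6F41 (reflected form 0x82F61B78). */
static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0x82F61B78u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}

/*
 * Compare the digest carried at the tail of a received PDU with one
 * recomputed over the received payload. On mismatch the transport fails the
 * command with a transient transport error (the (00/22) status printed
 * above) and dnr:0, so the initiator may retry it.
 */
static bool verify_data_digest(const void *data, size_t len, uint32_t recv_ddgst)
{
    return crc32c(data, len) == recv_ddgst;
}

int main(void)
{
    uint8_t payload[512];
    memset(payload, 0xA5, sizeof(payload));  /* arbitrary stand-in payload */

    uint32_t good = crc32c(payload, sizeof(payload));
    printf("digest ok:  %s\n",
           verify_data_digest(payload, sizeof(payload), good) ? "yes" : "no");

    payload[7] ^= 0x01;  /* single-bit corruption in flight */
    printf("digest bad: %s\n",
           verify_data_digest(payload, sizeof(payload), good) ? "yes" : "no");
    return 0;
}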
00:23:22.238 [2024-07-25 13:52:19.110045] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.110349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.110378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.238 [2024-07-25 13:52:19.116199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.116541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.116567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.238 [2024-07-25 13:52:19.122860] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.123196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.123227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.238 [2024-07-25 13:52:19.129655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.129970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.130000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.238 [2024-07-25 13:52:19.135717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.136104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.136146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.238 [2024-07-25 13:52:19.141424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.141761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.141790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.238 [2024-07-25 13:52:19.147012] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.147398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.238 [2024-07-25 13:52:19.147443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.238 [2024-07-25 13:52:19.152790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.238 [2024-07-25 13:52:19.153138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.239 [2024-07-25 13:52:19.153164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.239 [2024-07-25 13:52:19.158194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.239 [2024-07-25 13:52:19.158518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.239 [2024-07-25 13:52:19.158547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.239 [2024-07-25 13:52:19.163306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.239 [2024-07-25 13:52:19.163668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.239 [2024-07-25 13:52:19.163701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.239 [2024-07-25 13:52:19.168306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.239 [2024-07-25 13:52:19.168611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.239 [2024-07-25 13:52:19.168639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.239 [2024-07-25 13:52:19.173263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.239 [2024-07-25 13:52:19.173601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.239 [2024-07-25 13:52:19.173631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.239 [2024-07-25 13:52:19.178303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.239 [2024-07-25 13:52:19.178627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.239 [2024-07-25 13:52:19.178656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.239 [2024-07-25 13:52:19.183136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.239 [2024-07-25 13:52:19.183430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.239 [2024-07-25 13:52:19.183459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.239 [2024-07-25 13:52:19.188321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.239 [2024-07-25 13:52:19.188626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.239 [2024-07-25 13:52:19.188655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.239 [2024-07-25 13:52:19.194067] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.239 [2024-07-25 13:52:19.194379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.239 [2024-07-25 13:52:19.194407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.239 [2024-07-25 13:52:19.199275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.239 [2024-07-25 13:52:19.199594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.239 [2024-07-25 13:52:19.199623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.239 [2024-07-25 13:52:19.204388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.239 [2024-07-25 13:52:19.204705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.239 [2024-07-25 13:52:19.204733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.239 [2024-07-25 13:52:19.209497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.239 [2024-07-25 13:52:19.209804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.239 [2024-07-25 13:52:19.209833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.239 [2024-07-25 13:52:19.214535] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.239 [2024-07-25 13:52:19.214824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.239 [2024-07-25 13:52:19.214853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.239 [2024-07-25 13:52:19.219434] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.239 [2024-07-25 13:52:19.219735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.239 [2024-07-25 13:52:19.219763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.239 [2024-07-25 13:52:19.224847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.239 [2024-07-25 13:52:19.225176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.239 [2024-07-25 13:52:19.225209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.239 [2024-07-25 13:52:19.230283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.239 [2024-07-25 13:52:19.230580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.239 [2024-07-25 13:52:19.230607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.239 [2024-07-25 13:52:19.235339] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.239 [2024-07-25 13:52:19.235682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.239 [2024-07-25 13:52:19.235711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.239 [2024-07-25 13:52:19.240486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.239 [2024-07-25 13:52:19.240786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.239 [2024-07-25 13:52:19.240830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.239 [2024-07-25 13:52:19.245735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.239 [2024-07-25 13:52:19.246042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.239 [2024-07-25 13:52:19.246079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.239 [2024-07-25 13:52:19.250806] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.239 [2024-07-25 13:52:19.251155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.239 [2024-07-25 13:52:19.251184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.239 [2024-07-25 13:52:19.256040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.239 [2024-07-25 13:52:19.256373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.239 
[2024-07-25 13:52:19.256417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.239 [2024-07-25 13:52:19.261074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.239 [2024-07-25 13:52:19.261357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.239 [2024-07-25 13:52:19.261385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.239 [2024-07-25 13:52:19.266223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.239 [2024-07-25 13:52:19.266521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.239 [2024-07-25 13:52:19.266549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.501 [2024-07-25 13:52:19.271816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.501 [2024-07-25 13:52:19.272142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.501 [2024-07-25 13:52:19.272177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.501 [2024-07-25 13:52:19.277669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.501 [2024-07-25 13:52:19.277959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.501 [2024-07-25 13:52:19.277988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.501 [2024-07-25 13:52:19.284704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.501 [2024-07-25 13:52:19.284795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.501 [2024-07-25 13:52:19.284822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.501 [2024-07-25 13:52:19.290987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.501 [2024-07-25 13:52:19.291281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.501 [2024-07-25 13:52:19.291315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.501 [2024-07-25 13:52:19.296163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.501 [2024-07-25 13:52:19.296452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:22.501 [2024-07-25 13:52:19.296481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.501 [2024-07-25 13:52:19.301109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.501 [2024-07-25 13:52:19.301394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.501 [2024-07-25 13:52:19.301429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.501 [2024-07-25 13:52:19.306734] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.501 [2024-07-25 13:52:19.307031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.501 [2024-07-25 13:52:19.307070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.501 [2024-07-25 13:52:19.313085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.501 [2024-07-25 13:52:19.313360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.501 [2024-07-25 13:52:19.313390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.501 [2024-07-25 13:52:19.317968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.501 [2024-07-25 13:52:19.318274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.501 [2024-07-25 13:52:19.318304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.501 [2024-07-25 13:52:19.323012] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.501 [2024-07-25 13:52:19.323339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.501 [2024-07-25 13:52:19.323368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.501 [2024-07-25 13:52:19.328266] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.501 [2024-07-25 13:52:19.328626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.501 [2024-07-25 13:52:19.328655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.501 [2024-07-25 13:52:19.334283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.501 [2024-07-25 13:52:19.334571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.501 [2024-07-25 13:52:19.334600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.501 [2024-07-25 13:52:19.340234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.501 [2024-07-25 13:52:19.340573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.501 [2024-07-25 13:52:19.340616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.501 [2024-07-25 13:52:19.346160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.501 [2024-07-25 13:52:19.346573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.501 [2024-07-25 13:52:19.346602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.501 [2024-07-25 13:52:19.351796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.501 [2024-07-25 13:52:19.352137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.352166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.357958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 [2024-07-25 13:52:19.358291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.358320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.363902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 [2024-07-25 13:52:19.364234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.364263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.368841] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 [2024-07-25 13:52:19.369130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.369172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.373778] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 [2024-07-25 13:52:19.374105] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.374139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.378902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 [2024-07-25 13:52:19.379234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.379262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.383852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 [2024-07-25 13:52:19.384175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.384205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.389110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 [2024-07-25 13:52:19.389437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.389466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.394866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 [2024-07-25 13:52:19.395187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.395216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.400081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 [2024-07-25 13:52:19.400408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.400436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.404966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 [2024-07-25 13:52:19.405265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.405294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.410481] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 [2024-07-25 13:52:19.410774] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.410803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.416581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 [2024-07-25 13:52:19.416789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.416821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.423010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 [2024-07-25 13:52:19.423314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.423344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.429363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 [2024-07-25 13:52:19.429649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.429683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.435789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 [2024-07-25 13:52:19.436123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.436152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.442822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 [2024-07-25 13:52:19.443126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.443156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.449309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 [2024-07-25 13:52:19.449618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.449656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.454526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 
[2024-07-25 13:52:19.454845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.454875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.460113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 [2024-07-25 13:52:19.460410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.460438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.465197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 [2024-07-25 13:52:19.465268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.465294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.471864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 [2024-07-25 13:52:19.472194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.472224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.478321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 [2024-07-25 13:52:19.478629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.478658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.484621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 [2024-07-25 13:52:19.484911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.502 [2024-07-25 13:52:19.484944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.502 [2024-07-25 13:52:19.491355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.502 [2024-07-25 13:52:19.491456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.503 [2024-07-25 13:52:19.491484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.503 [2024-07-25 13:52:19.498645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with 
pdu=0x2000190fef90 00:23:22.503 [2024-07-25 13:52:19.498970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.503 [2024-07-25 13:52:19.499000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.503 [2024-07-25 13:52:19.506163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.503 [2024-07-25 13:52:19.506530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.503 [2024-07-25 13:52:19.506557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.503 [2024-07-25 13:52:19.512721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.503 [2024-07-25 13:52:19.513016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.503 [2024-07-25 13:52:19.513047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.503 [2024-07-25 13:52:19.518039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.503 [2024-07-25 13:52:19.518330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.503 [2024-07-25 13:52:19.518360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.503 [2024-07-25 13:52:19.524556] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.503 [2024-07-25 13:52:19.524863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.503 [2024-07-25 13:52:19.524891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.503 [2024-07-25 13:52:19.529901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.503 [2024-07-25 13:52:19.530201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.503 [2024-07-25 13:52:19.530230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.503 [2024-07-25 13:52:19.534880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.503 [2024-07-25 13:52:19.535181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.503 [2024-07-25 13:52:19.535209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.765 [2024-07-25 13:52:19.539800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.540123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.540156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.765 [2024-07-25 13:52:19.544698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.544989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.545018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.765 [2024-07-25 13:52:19.549870] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.550227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.550256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.765 [2024-07-25 13:52:19.555704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.556111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.556155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.765 [2024-07-25 13:52:19.561630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.562009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.562036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.765 [2024-07-25 13:52:19.567185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.567541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.567569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.765 [2024-07-25 13:52:19.573050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.573372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.573400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.765 [2024-07-25 13:52:19.578508] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.578831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.578859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.765 [2024-07-25 13:52:19.583992] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.584279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.584309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.765 [2024-07-25 13:52:19.589350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.589624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.589652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.765 [2024-07-25 13:52:19.595810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.596085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.596115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.765 [2024-07-25 13:52:19.601489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.601883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.601936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.765 [2024-07-25 13:52:19.607562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.607955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.607983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.765 [2024-07-25 13:52:19.613346] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.613656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.613686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:22.765 [2024-07-25 13:52:19.618862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.619188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.619217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.765 [2024-07-25 13:52:19.624633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.624960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.624988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.765 [2024-07-25 13:52:19.630155] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.630456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.630487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.765 [2024-07-25 13:52:19.635161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.635455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.635484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.765 [2024-07-25 13:52:19.639975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.640276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.640305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.765 [2024-07-25 13:52:19.645506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.645837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.645866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.765 [2024-07-25 13:52:19.650561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.650864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.650892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.765 [2024-07-25 13:52:19.655461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.655835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.655865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.765 [2024-07-25 13:52:19.660558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.660888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.660915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.765 [2024-07-25 13:52:19.665584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.765 [2024-07-25 13:52:19.665949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.765 [2024-07-25 13:52:19.665976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.766 [2024-07-25 13:52:19.670820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.766 [2024-07-25 13:52:19.671193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.766 [2024-07-25 13:52:19.671222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.766 [2024-07-25 13:52:19.675846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.766 [2024-07-25 13:52:19.676148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.766 [2024-07-25 13:52:19.676177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.766 [2024-07-25 13:52:19.680724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.766 [2024-07-25 13:52:19.681026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.766 [2024-07-25 13:52:19.681081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.766 [2024-07-25 13:52:19.685745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.766 [2024-07-25 13:52:19.686049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.766 [2024-07-25 13:52:19.686086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.766 [2024-07-25 13:52:19.690998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.766 [2024-07-25 13:52:19.691298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.766 [2024-07-25 13:52:19.691327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.766 [2024-07-25 13:52:19.697227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.766 [2024-07-25 13:52:19.697511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.766 [2024-07-25 13:52:19.697540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.766 [2024-07-25 13:52:19.703588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.766 [2024-07-25 13:52:19.703860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.766 [2024-07-25 13:52:19.703890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.766 [2024-07-25 13:52:19.711112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.766 [2024-07-25 13:52:19.711417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.766 [2024-07-25 13:52:19.711449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.766 [2024-07-25 13:52:19.717963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.766 [2024-07-25 13:52:19.718267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.766 [2024-07-25 13:52:19.718297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.766 [2024-07-25 13:52:19.724572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.766 [2024-07-25 13:52:19.724868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.766 [2024-07-25 13:52:19.724896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.766 [2024-07-25 13:52:19.730950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.766 [2024-07-25 13:52:19.731255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.766 [2024-07-25 13:52:19.731284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.766 [2024-07-25 13:52:19.736714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.766 [2024-07-25 13:52:19.737033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.766 [2024-07-25 13:52:19.737089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.766 [2024-07-25 13:52:19.742252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.766 [2024-07-25 13:52:19.742574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.766 [2024-07-25 13:52:19.742603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.766 [2024-07-25 13:52:19.747706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.766 [2024-07-25 13:52:19.748093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.766 [2024-07-25 13:52:19.748128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.766 [2024-07-25 13:52:19.753601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.766 [2024-07-25 13:52:19.753973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.766 [2024-07-25 13:52:19.754001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.766 [2024-07-25 13:52:19.759405] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.766 [2024-07-25 13:52:19.759700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.766 [2024-07-25 13:52:19.759731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.766 [2024-07-25 13:52:19.764409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.766 [2024-07-25 13:52:19.764702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.766 [2024-07-25 13:52:19.764731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.766 [2024-07-25 13:52:19.769360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.766 [2024-07-25 13:52:19.769679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.766 
[2024-07-25 13:52:19.769708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.766 [2024-07-25 13:52:19.774374] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.766 [2024-07-25 13:52:19.774694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.766 [2024-07-25 13:52:19.774723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.766 [2024-07-25 13:52:19.779373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.766 [2024-07-25 13:52:19.779670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.766 [2024-07-25 13:52:19.779697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.766 [2024-07-25 13:52:19.784432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.766 [2024-07-25 13:52:19.784728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.766 [2024-07-25 13:52:19.784756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.766 [2024-07-25 13:52:19.789347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.766 [2024-07-25 13:52:19.789671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.766 [2024-07-25 13:52:19.789700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.766 [2024-07-25 13:52:19.794262] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:22.766 [2024-07-25 13:52:19.794552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.766 [2024-07-25 13:52:19.794580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.028 [2024-07-25 13:52:19.799286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:23.028 [2024-07-25 13:52:19.799706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.028 [2024-07-25 13:52:19.799752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.028 [2024-07-25 13:52:19.804512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90 00:23:23.028 [2024-07-25 13:52:19.804896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0
00:23:23.028 [2024-07-25 13:52:19.804924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:23.028 [2024-07-25 13:52:19.809527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90
00:23:23.028 [2024-07-25 13:52:19.809819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:23.028 [2024-07-25 13:52:19.809847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same data_crc32_calc_done *ERROR* / WRITE *NOTICE* / TRANSIENT TRANSPORT ERROR *NOTICE* triplet repeats for each remaining injected digest error (13:52:19.814560 through 13:52:20.144945), differing only in timestamp, lba, and sqhd ...]
00:23:23.289 [2024-07-25 13:52:20.150784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd16270) with pdu=0x2000190fef90
00:23:23.289 [2024-07-25 13:52:20.150856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:23.289 [2024-07-25 13:52:20.150882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:23.289
00:23:23.289 Latency(us)
00:23:23.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:23.289 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:23:23.289 nvme0n1 : 2.00 5356.00 669.50 0.00 0.00 2979.45 2293.76 7767.23
00:23:23.289 ===================================================================================================================
00:23:23.289 Total : 5356.00 669.50 0.00 0.00 2979.45 2293.76 7767.23
00:23:23.289 0
00:23:23.289 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:23.289 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:23.289 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:23.289 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:23.290 | .driver_specific
00:23:23.290 | .nvme_error
00:23:23.290 | .status_code
00:23:23.290 | .command_transient_transport_error'
00:23:23.550 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 346 > 0 ))
00:23:23.550 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 658601
00:23:23.550 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 658601 ']'
00:23:23.550 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 658601
00:23:23.550 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:23:23.550 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:23.550 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 658601
00:23:23.550 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:23.550 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:23.550 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 658601'
00:23:23.550 killing process with pid 658601
00:23:23.550 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 658601
00:23:23.550 Received shutdown signal, test time was about 2.000000 seconds
00:23:23.550
00:23:23.550 Latency(us)
00:23:23.550 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:23.550 ===================================================================================================================
00:23:23.550 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:23.550 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 658601
00:23:23.808 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 657231
00:23:23.808 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 657231 ']'
00:23:23.808 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 657231
00:23:23.808 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:23:23.808 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:23.808 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 657231
00:23:23.808 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:23:23.808 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:23:23.808 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 657231'
00:23:23.808 killing process with pid 657231
00:23:23.808 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 657231
00:23:23.808 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 657231
00:23:24.067
00:23:24.067 real 0m15.454s
00:23:24.067 user 0m30.301s
00:23:24.067 sys 0m4.387s
00:23:24.067 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:24.067 13:52:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:24.067 ************************************
00:23:24.067 END TEST nvmf_digest_error
00:23:24.067 ************************************
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:24.067 rmmod nvme_tcp
00:23:24.067 rmmod nvme_fabrics
00:23:24.067 rmmod nvme_keyring
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 657231 ']'
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 657231
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 657231 ']'
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 657231
00:23:24.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (657231) - No such process
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 657231 is not found'
00:23:24.067 Process with pid 657231 is not found
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:24.067 13:52:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:23:26.597
00:23:26.597 real 0m35.233s
00:23:26.597 user 1m1.196s
00:23:26.597 sys 0m10.311s
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:23:26.597 ************************************
00:23:26.597 END TEST nvmf_digest
00:23:26.597 ************************************
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:23:26.597 ************************************
00:23:26.597 START TEST nvmf_bdevperf
00:23:26.597 ************************************
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
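The START TEST banner just above, like the END TEST banner and the real/user/sys block a few entries earlier, comes from the run_test wrapper in autotest_common.sh. A loose sketch of that pattern as it shows up in this output (reconstructed from the log itself, not from the actual autotest_common.sh source; the name run_test_sketch is hypothetical):

# Hypothetical reconstruction of the banner/timing wrapper visible in this log.
run_test_sketch() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"    # the wrapped test, e.g. host/bdevperf.sh --transport=tcp
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}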
00:23:26.597 * Looking for test storage...
00:23:26.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go triplet repeated six more times by earlier sourcing ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same repeated toolchain segments ...]:/var/lib/snapd/snap/bin
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same repeated toolchain segments ...]:/var/lib/snapd/snap/bin
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same PATH value as above ...]:/var/lib/snapd/snap/bin
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:26.597 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf --
nvmf/common.sh@448 -- # prepare_net_devs 00:23:26.598 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:26.598 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:26.598 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.598 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.598 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.598 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:26.598 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:26.598 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:26.598 13:52:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:28.500 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:28.500 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:28.500 13:52:25 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:28.500 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:28.500 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:28.500 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:28.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:23:28.501 00:23:28.501 --- 10.0.0.2 ping statistics --- 00:23:28.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.501 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:28.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:28.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:23:28.501 00:23:28.501 --- 10.0.0.1 ping statistics --- 00:23:28.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.501 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=661006 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 661006 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 661006 ']' 
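nvmfappstart has just launched nvmf_tgt (pid 661006) inside the cvl_0_0_ns_spdk namespace, and waitforlisten now polls until the target's RPC server answers before any of the rpc_cmd calls below can run. A minimal sketch of that kind of readiness poll, assuming the stock scripts/rpc.py client and the default /var/tmp/spdk.sock socket (wait_for_rpc_sock is a hypothetical helper, not the waitforlisten implementation from autotest_common.sh):

# Hypothetical readiness poll; the real waitforlisten lives in test/common/autotest_common.sh.
wait_for_rpc_sock() {
    local rpc_sock=${1:-/var/tmp/spdk.sock}
    local retries=${2:-100}
    local i
    for ((i = 0; i < retries; i++)); do
        # rpc_get_methods only succeeds once the app's RPC server accepts connections.
        if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s "$rpc_sock" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1 # target never came up
}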
00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:28.501 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:28.501 [2024-07-25 13:52:25.465924] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:28.501 [2024-07-25 13:52:25.466017] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.501 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.501 [2024-07-25 13:52:25.532415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:28.759 [2024-07-25 13:52:25.637002] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.759 [2024-07-25 13:52:25.637076] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.759 [2024-07-25 13:52:25.637092] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.759 [2024-07-25 13:52:25.637118] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.759 [2024-07-25 13:52:25.637128] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
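Both suggestions in the tracepoint notice above are directly actionable while the target is running; a short sketch of each, assuming this workspace's build layout (spdk_trace is built under build/bin, and the /tmp destination is arbitrary):

# Snapshot live tracepoints from shared-memory instance 0 of the app whose shm name is nvmf:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
# Or keep the raw trace file for offline decoding after the run:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0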
00:23:28.759 [2024-07-25 13:52:25.637176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.759 [2024-07-25 13:52:25.637234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:28.759 [2024-07-25 13:52:25.637237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.759 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:28.759 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:23:28.759 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:28.759 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:28.759 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:28.759 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.759 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:28.759 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.759 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:28.759 [2024-07-25 13:52:25.785099] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.019 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.019 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:29.019 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.019 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:29.019 Malloc0 00:23:29.019 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.019 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:29.019 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.019 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:29.019 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.019 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:29.019 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.019 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:29.019 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.019 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:29.019 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.019 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:29.019 [2024-07-25 13:52:25.856977] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.019 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.019 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:23:29.019 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:23:29.019 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:23:29.019 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:23:29.019 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:29.019 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:29.019 { 00:23:29.019 "params": { 00:23:29.019 "name": "Nvme$subsystem", 00:23:29.019 "trtype": "$TEST_TRANSPORT", 00:23:29.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.019 "adrfam": "ipv4", 00:23:29.019 "trsvcid": "$NVMF_PORT", 00:23:29.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.019 "hdgst": ${hdgst:-false}, 00:23:29.019 "ddgst": ${ddgst:-false} 00:23:29.019 }, 00:23:29.019 "method": "bdev_nvme_attach_controller" 00:23:29.019 } 00:23:29.020 EOF 00:23:29.020 )") 00:23:29.020 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:23:29.020 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:23:29.020 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:23:29.020 13:52:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:29.020 "params": { 00:23:29.020 "name": "Nvme1", 00:23:29.020 "trtype": "tcp", 00:23:29.020 "traddr": "10.0.0.2", 00:23:29.020 "adrfam": "ipv4", 00:23:29.020 "trsvcid": "4420", 00:23:29.020 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.020 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.020 "hdgst": false, 00:23:29.020 "ddgst": false 00:23:29.020 }, 00:23:29.020 "method": "bdev_nvme_attach_controller" 00:23:29.020 }' 00:23:29.020 [2024-07-25 13:52:25.905502] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:29.020 [2024-07-25 13:52:25.905567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661114 ] 00:23:29.020 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.020 [2024-07-25 13:52:25.964640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.280 [2024-07-25 13:52:26.077014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.280 Running I/O for 1 seconds... 
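The heredoc/jq sequence traced above (gen_nvmf_target_json) assembles a single bdev_nvme_attach_controller entry and hands it to bdevperf on /dev/fd/62. A sketch of the equivalent invocation with the config written to a regular file instead: the attach-controller entry is exactly what the printf trace echoes, while the surrounding "subsystems"/"bdev" envelope is an assumption about what jq assembles, since the trace only shows the inner entry.

    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF
    # same flags as the traced run: 128-deep queue, 4 KiB I/O, verify workload, 1 s
    ./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1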
00:23:30.655 00:23:30.655 Latency(us) 00:23:30.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.655 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:30.655 Verification LBA range: start 0x0 length 0x4000 00:23:30.655 Nvme1n1 : 1.01 8750.09 34.18 0.00 0.00 14564.15 3325.35 15146.10 00:23:30.655 =================================================================================================================== 00:23:30.655 Total : 8750.09 34.18 0.00 0.00 14564.15 3325.35 15146.10 00:23:30.655 13:52:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=661258 00:23:30.655 13:52:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:23:30.655 13:52:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:23:30.655 13:52:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:23:30.655 13:52:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:23:30.655 13:52:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:23:30.655 13:52:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:30.655 13:52:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:30.655 { 00:23:30.655 "params": { 00:23:30.655 "name": "Nvme$subsystem", 00:23:30.655 "trtype": "$TEST_TRANSPORT", 00:23:30.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.655 "adrfam": "ipv4", 00:23:30.655 "trsvcid": "$NVMF_PORT", 00:23:30.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.655 "hdgst": ${hdgst:-false}, 00:23:30.655 "ddgst": ${ddgst:-false} 00:23:30.655 }, 00:23:30.655 "method": "bdev_nvme_attach_controller" 00:23:30.655 } 00:23:30.655 EOF 00:23:30.655 )") 00:23:30.655 13:52:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:23:30.655 13:52:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:23:30.655 13:52:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:23:30.655 13:52:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:30.655 "params": { 00:23:30.655 "name": "Nvme1", 00:23:30.655 "trtype": "tcp", 00:23:30.655 "traddr": "10.0.0.2", 00:23:30.655 "adrfam": "ipv4", 00:23:30.655 "trsvcid": "4420", 00:23:30.655 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.655 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:30.655 "hdgst": false, 00:23:30.655 "ddgst": false 00:23:30.655 }, 00:23:30.655 "method": "bdev_nvme_attach_controller" 00:23:30.655 }' 00:23:30.655 [2024-07-25 13:52:27.613315] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:30.655 [2024-07-25 13:52:27.613428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661258 ] 00:23:30.655 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.655 [2024-07-25 13:52:27.672280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.915 [2024-07-25 13:52:27.780601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.175 Running I/O for 15 seconds... 
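The second run is the failover phase: the same verify workload, but for 15 seconds and with -f, which the harness passes so the run continues once I/O starts failing (that reading of the flag is inferred from how bdevperf.sh uses it; check the bdevperf usage text). While it runs, the harness kills the target out from under it, which produces the abort storm below. A sketch of that sequence, with the PIDs the log records as 661006 (nvmfpid) and 661258 (bdevperfpid):

    ./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!
    sleep 3              # let the run reach steady state (host/bdevperf.sh@32)
    kill -9 "$nvmfpid"   # drop the target hard (host/bdevperf.sh@33)
    sleep 3              # give the host side time to notice (host/bdevperf.sh@35)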
00:23:33.709 13:52:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 661006
00:23:33.709 13:52:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:23:33.709 [2024-07-25 13:52:30.581827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:32448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:33.710 [2024-07-25 13:52:30.581885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / ABORTED - SQ DELETION pair repeats between 13:52:30.581916 and 13:52:30.585646 for every outstanding I/O on qid:1 -- WRITEs lba 32456-33152 and READs lba 32136-32440 -- each completing ABORTED - SQ DELETION (00/08) ...]
00:23:33.713 [2024-07-25 13:52:30.585659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd3830 is same with the state(5) to be set
00:23:33.713 [2024-07-25 13:52:30.585674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:33.713 [2024-07-25 13:52:30.585684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:33.713 [2024-07-25 13:52:30.585694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32440 len:8 PRP1 0x0 PRP2 0x0
00:23:33.713 [2024-07-25 13:52:30.585706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:33.713 [2024-07-25 13:52:30.585762] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1dd3830 was disconnected and freed. reset controller.
00:23:33.713 [2024-07-25 13:52:30.588931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:33.713 [2024-07-25 13:52:30.589007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:33.713 [2024-07-25 13:52:30.589776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:33.713 [2024-07-25 13:52:30.589805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:33.713 [2024-07-25 13:52:30.589822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:33.713 [2024-07-25 13:52:30.590077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:33.713 [2024-07-25 13:52:30.590312] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:33.713 [2024-07-25 13:52:30.590333] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:33.713 [2024-07-25 13:52:30.590351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:33.713 [2024-07-25 13:52:30.593422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:33.713 [2024-07-25 13:52:30.602374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:33.713 [2024-07-25 13:52:30.602724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:33.713 [2024-07-25 13:52:30.602752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:33.713 [2024-07-25 13:52:30.602768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:33.713 [2024-07-25 13:52:30.603004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:33.713 [2024-07-25 13:52:30.603250] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:33.713 [2024-07-25 13:52:30.603273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:33.713 [2024-07-25 13:52:30.603287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:33.714 [2024-07-25 13:52:30.606181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:33.714 [2024-07-25 13:52:30.615489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.714 [2024-07-25 13:52:30.615900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.714 [2024-07-25 13:52:30.615928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.714 [2024-07-25 13:52:30.615943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.714 [2024-07-25 13:52:30.616202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.714 [2024-07-25 13:52:30.616449] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.714 [2024-07-25 13:52:30.616469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.714 [2024-07-25 13:52:30.616481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.714 [2024-07-25 13:52:30.619458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.714 [2024-07-25 13:52:30.628531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.714 [2024-07-25 13:52:30.628944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.714 [2024-07-25 13:52:30.628971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.714 [2024-07-25 13:52:30.628986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.714 [2024-07-25 13:52:30.629252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.714 [2024-07-25 13:52:30.629483] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.714 [2024-07-25 13:52:30.629502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.714 [2024-07-25 13:52:30.629514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.714 [2024-07-25 13:52:30.632396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.714 [2024-07-25 13:52:30.641599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.714 [2024-07-25 13:52:30.641977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.714 [2024-07-25 13:52:30.642004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.714 [2024-07-25 13:52:30.642019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.714 [2024-07-25 13:52:30.642270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.714 [2024-07-25 13:52:30.642493] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.714 [2024-07-25 13:52:30.642513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.714 [2024-07-25 13:52:30.642525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.714 [2024-07-25 13:52:30.645391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.714 [2024-07-25 13:52:30.654706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.714 [2024-07-25 13:52:30.655069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.714 [2024-07-25 13:52:30.655095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.714 [2024-07-25 13:52:30.655111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.714 [2024-07-25 13:52:30.655325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.714 [2024-07-25 13:52:30.655561] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.714 [2024-07-25 13:52:30.655581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.714 [2024-07-25 13:52:30.655598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.714 [2024-07-25 13:52:30.658608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.714 [2024-07-25 13:52:30.667806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.714 [2024-07-25 13:52:30.668215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.714 [2024-07-25 13:52:30.668243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.714 [2024-07-25 13:52:30.668259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.714 [2024-07-25 13:52:30.668496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.714 [2024-07-25 13:52:30.668699] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.714 [2024-07-25 13:52:30.668719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.714 [2024-07-25 13:52:30.668731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.714 [2024-07-25 13:52:30.671643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.714 [2024-07-25 13:52:30.680857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.714 [2024-07-25 13:52:30.681271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.714 [2024-07-25 13:52:30.681297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.714 [2024-07-25 13:52:30.681312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.714 [2024-07-25 13:52:30.681545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.714 [2024-07-25 13:52:30.681749] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.714 [2024-07-25 13:52:30.681768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.714 [2024-07-25 13:52:30.681780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.714 [2024-07-25 13:52:30.684673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.714 [2024-07-25 13:52:30.693870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.714 [2024-07-25 13:52:30.694286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.714 [2024-07-25 13:52:30.694314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.714 [2024-07-25 13:52:30.694329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.714 [2024-07-25 13:52:30.694563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.714 [2024-07-25 13:52:30.694766] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.714 [2024-07-25 13:52:30.694786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.714 [2024-07-25 13:52:30.694798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.714 [2024-07-25 13:52:30.697705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.714 [2024-07-25 13:52:30.706999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.714 [2024-07-25 13:52:30.707376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.714 [2024-07-25 13:52:30.707423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.714 [2024-07-25 13:52:30.707440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.714 [2024-07-25 13:52:30.707673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.714 [2024-07-25 13:52:30.707879] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.714 [2024-07-25 13:52:30.707898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.714 [2024-07-25 13:52:30.707910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.714 [2024-07-25 13:52:30.711004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.715 [2024-07-25 13:52:30.720438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.715 [2024-07-25 13:52:30.720856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.715 [2024-07-25 13:52:30.720885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.715 [2024-07-25 13:52:30.720901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.715 [2024-07-25 13:52:30.721139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.715 [2024-07-25 13:52:30.721368] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.715 [2024-07-25 13:52:30.721389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.715 [2024-07-25 13:52:30.721418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.715 [2024-07-25 13:52:30.724452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.715 [2024-07-25 13:52:30.733732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.715 [2024-07-25 13:52:30.734218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.715 [2024-07-25 13:52:30.734247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.715 [2024-07-25 13:52:30.734263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.715 [2024-07-25 13:52:30.734504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.715 [2024-07-25 13:52:30.734713] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.715 [2024-07-25 13:52:30.734732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.715 [2024-07-25 13:52:30.734745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.715 [2024-07-25 13:52:30.737734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
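The cycles above repeat with a roughly 13 ms period: disconnect, fail the TCP connect, mark the controller failed, log "Resetting controller failed.", then poll again. A bounded retry loop in that spirit might look like the sketch below (hypothetical stand-in logic with an invented try_reconnect() helper, not bdev_nvme's actual reset state machine):

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical stand-in for one reconnect attempt; in the log this is
     * nvme_tcp_qpair_connect_sock() failing with ECONNREFUSED. */
    static bool try_reconnect(void) { return false; }

    int main(void)
    {
        /* Bounded retry loop mirroring the cadence above: each cycle
         * disconnects, retries the TCP connect, and reports failure
         * before backing off and trying again. */
        for (int attempt = 1; attempt <= 5; attempt++) {
            if (try_reconnect()) {
                printf("attempt %d: reconnected\n", attempt);
                return 0;
            }
            printf("attempt %d: Resetting controller failed.\n", attempt);
            usleep(13000); /* ~13 ms, as observed between log cycles */
        }
        return 1;
    }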
00:23:33.973 [2024-07-25 13:52:30.747088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.973 [2024-07-25 13:52:30.747410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.973 [2024-07-25 13:52:30.747437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.973 [2024-07-25 13:52:30.747452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.973 [2024-07-25 13:52:30.747675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.973 [2024-07-25 13:52:30.747889] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.973 [2024-07-25 13:52:30.747909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.973 [2024-07-25 13:52:30.747922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.973 [2024-07-25 13:52:30.750937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.973 [2024-07-25 13:52:30.760517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.973 [2024-07-25 13:52:30.760895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.973 [2024-07-25 13:52:30.760922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.973 [2024-07-25 13:52:30.760938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.973 [2024-07-25 13:52:30.761171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.973 [2024-07-25 13:52:30.761400] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.973 [2024-07-25 13:52:30.761420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.973 [2024-07-25 13:52:30.761432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.973 [2024-07-25 13:52:30.764442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.973 [2024-07-25 13:52:30.773837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.973 [2024-07-25 13:52:30.774156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.973 [2024-07-25 13:52:30.774185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.973 [2024-07-25 13:52:30.774202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.973 [2024-07-25 13:52:30.774445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.973 [2024-07-25 13:52:30.774654] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.973 [2024-07-25 13:52:30.774674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.973 [2024-07-25 13:52:30.774686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.973 [2024-07-25 13:52:30.777754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.973 [2024-07-25 13:52:30.787246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.973 [2024-07-25 13:52:30.787698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.973 [2024-07-25 13:52:30.787725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.973 [2024-07-25 13:52:30.787741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.973 [2024-07-25 13:52:30.787982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.973 [2024-07-25 13:52:30.788229] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.973 [2024-07-25 13:52:30.788252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.973 [2024-07-25 13:52:30.788266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.973 [2024-07-25 13:52:30.791319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.973 [2024-07-25 13:52:30.800714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.973 [2024-07-25 13:52:30.801073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.973 [2024-07-25 13:52:30.801101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.973 [2024-07-25 13:52:30.801118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.973 [2024-07-25 13:52:30.801333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.973 [2024-07-25 13:52:30.801559] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.973 [2024-07-25 13:52:30.801578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.973 [2024-07-25 13:52:30.801591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.973 [2024-07-25 13:52:30.804589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.973 [2024-07-25 13:52:30.814057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.973 [2024-07-25 13:52:30.814427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.973 [2024-07-25 13:52:30.814455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.973 [2024-07-25 13:52:30.814472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.973 [2024-07-25 13:52:30.814713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.973 [2024-07-25 13:52:30.814923] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.973 [2024-07-25 13:52:30.814942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.973 [2024-07-25 13:52:30.814955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.973 [2024-07-25 13:52:30.817994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.974 [2024-07-25 13:52:30.827408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.974 [2024-07-25 13:52:30.827792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.974 [2024-07-25 13:52:30.827820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.974 [2024-07-25 13:52:30.827836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.974 [2024-07-25 13:52:30.828056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.974 [2024-07-25 13:52:30.828281] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.974 [2024-07-25 13:52:30.828301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.974 [2024-07-25 13:52:30.828313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.974 [2024-07-25 13:52:30.831324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.974 [2024-07-25 13:52:30.840636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.974 [2024-07-25 13:52:30.840979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.974 [2024-07-25 13:52:30.841008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.974 [2024-07-25 13:52:30.841028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.974 [2024-07-25 13:52:30.841271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.974 [2024-07-25 13:52:30.841509] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.974 [2024-07-25 13:52:30.841530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.974 [2024-07-25 13:52:30.841543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.974 [2024-07-25 13:52:30.845106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.974 [2024-07-25 13:52:30.854570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.974 [2024-07-25 13:52:30.855016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.974 [2024-07-25 13:52:30.855044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.974 [2024-07-25 13:52:30.855068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.974 [2024-07-25 13:52:30.855315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.974 [2024-07-25 13:52:30.855538] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.974 [2024-07-25 13:52:30.855557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.974 [2024-07-25 13:52:30.855570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.974 [2024-07-25 13:52:30.858724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.974 [2024-07-25 13:52:30.867962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.974 [2024-07-25 13:52:30.868409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.974 [2024-07-25 13:52:30.868438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.974 [2024-07-25 13:52:30.868454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.974 [2024-07-25 13:52:30.868689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.974 [2024-07-25 13:52:30.868892] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.974 [2024-07-25 13:52:30.868911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.974 [2024-07-25 13:52:30.868923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.974 [2024-07-25 13:52:30.871887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.974 [2024-07-25 13:52:30.881311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.974 [2024-07-25 13:52:30.881750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.974 [2024-07-25 13:52:30.881777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.974 [2024-07-25 13:52:30.881792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.974 [2024-07-25 13:52:30.882028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.974 [2024-07-25 13:52:30.882255] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.974 [2024-07-25 13:52:30.882281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.974 [2024-07-25 13:52:30.882295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.974 [2024-07-25 13:52:30.885238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.974 [2024-07-25 13:52:30.894548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.974 [2024-07-25 13:52:30.895014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.974 [2024-07-25 13:52:30.895042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.974 [2024-07-25 13:52:30.895065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.974 [2024-07-25 13:52:30.895297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.974 [2024-07-25 13:52:30.895519] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.974 [2024-07-25 13:52:30.895539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.974 [2024-07-25 13:52:30.895551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.974 [2024-07-25 13:52:30.898443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.974 [2024-07-25 13:52:30.907623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.974 [2024-07-25 13:52:30.907972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.974 [2024-07-25 13:52:30.907999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.974 [2024-07-25 13:52:30.908014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.974 [2024-07-25 13:52:30.908294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.974 [2024-07-25 13:52:30.908502] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.974 [2024-07-25 13:52:30.908521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.974 [2024-07-25 13:52:30.908533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.974 [2024-07-25 13:52:30.911401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.974 [2024-07-25 13:52:30.920644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.974 [2024-07-25 13:52:30.920985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.974 [2024-07-25 13:52:30.921012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.974 [2024-07-25 13:52:30.921026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.974 [2024-07-25 13:52:30.921271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.974 [2024-07-25 13:52:30.921495] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.974 [2024-07-25 13:52:30.921514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.974 [2024-07-25 13:52:30.921526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.974 [2024-07-25 13:52:30.924421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.974 [2024-07-25 13:52:30.933699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.974 [2024-07-25 13:52:30.934010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.974 [2024-07-25 13:52:30.934037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.974 [2024-07-25 13:52:30.934053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.974 [2024-07-25 13:52:30.934318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.975 [2024-07-25 13:52:30.934524] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.975 [2024-07-25 13:52:30.934543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.975 [2024-07-25 13:52:30.934555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.975 [2024-07-25 13:52:30.937421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.975 [2024-07-25 13:52:30.946763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.975 [2024-07-25 13:52:30.947106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.975 [2024-07-25 13:52:30.947133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.975 [2024-07-25 13:52:30.947148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.975 [2024-07-25 13:52:30.947363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.975 [2024-07-25 13:52:30.947566] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.975 [2024-07-25 13:52:30.947585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.975 [2024-07-25 13:52:30.947597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.975 [2024-07-25 13:52:30.950493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.975 [2024-07-25 13:52:30.959941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.975 [2024-07-25 13:52:30.960279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.975 [2024-07-25 13:52:30.960307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.975 [2024-07-25 13:52:30.960322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.975 [2024-07-25 13:52:30.960556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.975 [2024-07-25 13:52:30.960760] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.975 [2024-07-25 13:52:30.960779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.975 [2024-07-25 13:52:30.960792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.975 [2024-07-25 13:52:30.963658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.975 [2024-07-25 13:52:30.972929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.975 [2024-07-25 13:52:30.973278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.975 [2024-07-25 13:52:30.973305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.975 [2024-07-25 13:52:30.973320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.975 [2024-07-25 13:52:30.973558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.975 [2024-07-25 13:52:30.973761] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.975 [2024-07-25 13:52:30.973780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.975 [2024-07-25 13:52:30.973793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.975 [2024-07-25 13:52:30.976687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.975 [2024-07-25 13:52:30.986026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.975 [2024-07-25 13:52:30.986376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.975 [2024-07-25 13:52:30.986403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.975 [2024-07-25 13:52:30.986419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.975 [2024-07-25 13:52:30.986653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.975 [2024-07-25 13:52:30.986856] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.975 [2024-07-25 13:52:30.986875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.975 [2024-07-25 13:52:30.986887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.975 [2024-07-25 13:52:30.989794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:33.975 [2024-07-25 13:52:30.999011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.975 [2024-07-25 13:52:30.999406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:33.975 [2024-07-25 13:52:30.999434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:33.975 [2024-07-25 13:52:30.999448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:33.975 [2024-07-25 13:52:30.999665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:33.975 [2024-07-25 13:52:30.999869] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:33.975 [2024-07-25 13:52:30.999888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:33.975 [2024-07-25 13:52:30.999901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.975 [2024-07-25 13:52:31.002799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:34.234 [2024-07-25 13:52:31.012046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:34.234 [2024-07-25 13:52:31.012560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.234 [2024-07-25 13:52:31.012589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:34.234 [2024-07-25 13:52:31.012605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:34.234 [2024-07-25 13:52:31.012848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:34.234 [2024-07-25 13:52:31.013091] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:34.234 [2024-07-25 13:52:31.013111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:34.234 [2024-07-25 13:52:31.013128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:34.234 [2024-07-25 13:52:31.016097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:34.234 [2024-07-25 13:52:31.025145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:34.234 [2024-07-25 13:52:31.025554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.234 [2024-07-25 13:52:31.025581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:34.234 [2024-07-25 13:52:31.025596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:34.234 [2024-07-25 13:52:31.025830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:34.234 [2024-07-25 13:52:31.026034] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:34.234 [2024-07-25 13:52:31.026053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:34.234 [2024-07-25 13:52:31.026089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:34.234 [2024-07-25 13:52:31.028961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:34.234 [2024-07-25 13:52:31.038195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:34.234 [2024-07-25 13:52:31.038543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.234 [2024-07-25 13:52:31.038570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:34.234 [2024-07-25 13:52:31.038586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:34.234 [2024-07-25 13:52:31.038823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:34.234 [2024-07-25 13:52:31.039027] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:34.234 [2024-07-25 13:52:31.039069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:34.234 [2024-07-25 13:52:31.039084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:34.234 [2024-07-25 13:52:31.041891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:34.234 [2024-07-25 13:52:31.051396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:34.234 [2024-07-25 13:52:31.051740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.234 [2024-07-25 13:52:31.051767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:34.234 [2024-07-25 13:52:31.051783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:34.234 [2024-07-25 13:52:31.052017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:34.234 [2024-07-25 13:52:31.052239] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:34.234 [2024-07-25 13:52:31.052260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:34.234 [2024-07-25 13:52:31.052273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:34.234 [2024-07-25 13:52:31.055154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:34.234 [2024-07-25 13:52:31.064360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:34.234 [2024-07-25 13:52:31.064769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.234 [2024-07-25 13:52:31.064797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:34.234 [2024-07-25 13:52:31.064812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:34.234 [2024-07-25 13:52:31.065046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:34.234 [2024-07-25 13:52:31.065249] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:34.234 [2024-07-25 13:52:31.065269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:34.234 [2024-07-25 13:52:31.065281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:34.234 [2024-07-25 13:52:31.068043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:34.234 [2024-07-25 13:52:31.077319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:34.234 [2024-07-25 13:52:31.077664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.234 [2024-07-25 13:52:31.077691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:34.234 [2024-07-25 13:52:31.077706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:34.234 [2024-07-25 13:52:31.077942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:34.234 [2024-07-25 13:52:31.078189] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:34.234 [2024-07-25 13:52:31.078210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:34.234 [2024-07-25 13:52:31.078224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:34.234 [2024-07-25 13:52:31.081096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:34.234 [2024-07-25 13:52:31.090457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:34.234 [2024-07-25 13:52:31.090825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.234 [2024-07-25 13:52:31.090851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:34.234 [2024-07-25 13:52:31.090866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:34.235 [2024-07-25 13:52:31.091111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:34.235 [2024-07-25 13:52:31.091317] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:34.235 [2024-07-25 13:52:31.091337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:34.235 [2024-07-25 13:52:31.091351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:34.235 [2024-07-25 13:52:31.094883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:34.235 [2024-07-25 13:52:31.103683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:34.235 [2024-07-25 13:52:31.104094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.235 [2024-07-25 13:52:31.104151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:34.235 [2024-07-25 13:52:31.104167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:34.235 [2024-07-25 13:52:31.104407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:34.235 [2024-07-25 13:52:31.104610] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:34.235 [2024-07-25 13:52:31.104629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:34.235 [2024-07-25 13:52:31.104641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:34.235 [2024-07-25 13:52:31.107585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:34.235 [2024-07-25 13:52:31.117326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:34.235 [2024-07-25 13:52:31.117776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.235 [2024-07-25 13:52:31.117803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:34.235 [2024-07-25 13:52:31.117819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:34.235 [2024-07-25 13:52:31.118054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:34.235 [2024-07-25 13:52:31.118294] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:34.235 [2024-07-25 13:52:31.118316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:34.235 [2024-07-25 13:52:31.118330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:34.235 [2024-07-25 13:52:31.121681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:34.235 [2024-07-25 13:52:31.130927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:34.235 [2024-07-25 13:52:31.131249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.235 [2024-07-25 13:52:31.131279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:34.235 [2024-07-25 13:52:31.131295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:34.235 [2024-07-25 13:52:31.131524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:34.235 [2024-07-25 13:52:31.131778] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:34.235 [2024-07-25 13:52:31.131799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:34.235 [2024-07-25 13:52:31.131812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:34.235 [2024-07-25 13:52:31.135012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:34.235 [2024-07-25 13:52:31.144624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:34.235 [2024-07-25 13:52:31.144987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.235 [2024-07-25 13:52:31.145015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:34.235 [2024-07-25 13:52:31.145030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:34.235 [2024-07-25 13:52:31.145255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:34.235 [2024-07-25 13:52:31.145512] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:34.235 [2024-07-25 13:52:31.145532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:34.235 [2024-07-25 13:52:31.145549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:34.235 [2024-07-25 13:52:31.148797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:34.235 [2024-07-25 13:52:31.158160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:34.235 [2024-07-25 13:52:31.158549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.235 [2024-07-25 13:52:31.158576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:34.235 [2024-07-25 13:52:31.158592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:34.235 [2024-07-25 13:52:31.158838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:34.235 [2024-07-25 13:52:31.159079] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:34.235 [2024-07-25 13:52:31.159101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:34.235 [2024-07-25 13:52:31.159116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:34.235 [2024-07-25 13:52:31.162304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:34.235 [2024-07-25 13:52:31.171788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:34.235 [2024-07-25 13:52:31.172187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.235 [2024-07-25 13:52:31.172216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:34.235 [2024-07-25 13:52:31.172232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:34.235 [2024-07-25 13:52:31.172461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:34.235 [2024-07-25 13:52:31.172707] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:34.235 [2024-07-25 13:52:31.172727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:34.235 [2024-07-25 13:52:31.172740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:34.235 [2024-07-25 13:52:31.176009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:34.235 [2024-07-25 13:52:31.185427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:34.235 [2024-07-25 13:52:31.185831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.235 [2024-07-25 13:52:31.185858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:34.235 [2024-07-25 13:52:31.185873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:34.235 [2024-07-25 13:52:31.186107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:34.235 [2024-07-25 13:52:31.186326] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:34.235 [2024-07-25 13:52:31.186363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:34.235 [2024-07-25 13:52:31.186378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:34.235 [2024-07-25 13:52:31.189627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:34.235 [2024-07-25 13:52:31.198982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:34.235 [2024-07-25 13:52:31.199311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.235 [2024-07-25 13:52:31.199347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:34.235 [2024-07-25 13:52:31.199364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:34.235 [2024-07-25 13:52:31.199593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:34.235 [2024-07-25 13:52:31.199837] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:34.235 [2024-07-25 13:52:31.199857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:34.235 [2024-07-25 13:52:31.199870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:34.235 [2024-07-25 13:52:31.203091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:34.235 [2024-07-25 13:52:31.212349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:34.235 [2024-07-25 13:52:31.212710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.235 [2024-07-25 13:52:31.212737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:34.235 [2024-07-25 13:52:31.212753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:34.235 [2024-07-25 13:52:31.212987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:34.235 [2024-07-25 13:52:31.213236] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:34.236 [2024-07-25 13:52:31.213259] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:34.236 [2024-07-25 13:52:31.213273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:34.236 [2024-07-25 13:52:31.216251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:34.236 [2024-07-25 13:52:31.225563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:34.236 [2024-07-25 13:52:31.225907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.236 [2024-07-25 13:52:31.225935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:34.236 [2024-07-25 13:52:31.225950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:34.236 [2024-07-25 13:52:31.226201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:34.236 [2024-07-25 13:52:31.226452] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:34.236 [2024-07-25 13:52:31.226471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:34.236 [2024-07-25 13:52:31.226483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:34.236 [2024-07-25 13:52:31.229400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:34.236 [2024-07-25 13:52:31.238608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:34.236 [2024-07-25 13:52:31.238982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.236 [2024-07-25 13:52:31.239009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:34.236 [2024-07-25 13:52:31.239024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:34.236 [2024-07-25 13:52:31.239300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:34.236 [2024-07-25 13:52:31.239516] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:34.236 [2024-07-25 13:52:31.239535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:34.236 [2024-07-25 13:52:31.239547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:34.236 [2024-07-25 13:52:31.242414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:34.236 [2024-07-25 13:52:31.251615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.236 [2024-07-25 13:52:31.251936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.236 [2024-07-25 13:52:31.251964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.236 [2024-07-25 13:52:31.251979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.236 [2024-07-25 13:52:31.252242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.236 [2024-07-25 13:52:31.252457] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.236 [2024-07-25 13:52:31.252478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.236 [2024-07-25 13:52:31.252491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.236 [2024-07-25 13:52:31.255390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.236 [2024-07-25 13:52:31.264614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.236 [2024-07-25 13:52:31.264999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.236 [2024-07-25 13:52:31.265046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.236 [2024-07-25 13:52:31.265071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.236 [2024-07-25 13:52:31.265327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.236 [2024-07-25 13:52:31.265569] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.236 [2024-07-25 13:52:31.265589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.236 [2024-07-25 13:52:31.265602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.496 [2024-07-25 13:52:31.268750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.496 [2024-07-25 13:52:31.277732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.496 [2024-07-25 13:52:31.278040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.496 [2024-07-25 13:52:31.278087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.497 [2024-07-25 13:52:31.278120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.497 [2024-07-25 13:52:31.278356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.497 [2024-07-25 13:52:31.278559] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.497 [2024-07-25 13:52:31.278579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.497 [2024-07-25 13:52:31.278591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.497 [2024-07-25 13:52:31.281490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.497 [2024-07-25 13:52:31.290827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.497 [2024-07-25 13:52:31.291137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.497 [2024-07-25 13:52:31.291165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.497 [2024-07-25 13:52:31.291181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.497 [2024-07-25 13:52:31.291397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.497 [2024-07-25 13:52:31.291602] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.497 [2024-07-25 13:52:31.291621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.497 [2024-07-25 13:52:31.291633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.497 [2024-07-25 13:52:31.294509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.497 [2024-07-25 13:52:31.303877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.497 [2024-07-25 13:52:31.304288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.497 [2024-07-25 13:52:31.304315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.497 [2024-07-25 13:52:31.304331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.497 [2024-07-25 13:52:31.304564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.497 [2024-07-25 13:52:31.304767] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.497 [2024-07-25 13:52:31.304786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.497 [2024-07-25 13:52:31.304798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.497 [2024-07-25 13:52:31.307696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.497 [2024-07-25 13:52:31.316931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.497 [2024-07-25 13:52:31.317285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.497 [2024-07-25 13:52:31.317313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.497 [2024-07-25 13:52:31.317329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.497 [2024-07-25 13:52:31.317565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.497 [2024-07-25 13:52:31.317769] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.497 [2024-07-25 13:52:31.317788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.497 [2024-07-25 13:52:31.317800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.497 [2024-07-25 13:52:31.320694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.497 [2024-07-25 13:52:31.330126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.497 [2024-07-25 13:52:31.330436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.497 [2024-07-25 13:52:31.330464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.497 [2024-07-25 13:52:31.330484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.497 [2024-07-25 13:52:31.330700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.497 [2024-07-25 13:52:31.330903] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.497 [2024-07-25 13:52:31.330922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.497 [2024-07-25 13:52:31.330934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.497 [2024-07-25 13:52:31.333822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.497 [2024-07-25 13:52:31.343211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.497 [2024-07-25 13:52:31.343596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.497 [2024-07-25 13:52:31.343624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.497 [2024-07-25 13:52:31.343640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.497 [2024-07-25 13:52:31.343896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.497 [2024-07-25 13:52:31.344155] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.497 [2024-07-25 13:52:31.344177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.497 [2024-07-25 13:52:31.344190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.497 [2024-07-25 13:52:31.347704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.497 [2024-07-25 13:52:31.356417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.497 [2024-07-25 13:52:31.356824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.497 [2024-07-25 13:52:31.356852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.497 [2024-07-25 13:52:31.356867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.497 [2024-07-25 13:52:31.357116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.497 [2024-07-25 13:52:31.357352] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.497 [2024-07-25 13:52:31.357372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.497 [2024-07-25 13:52:31.357385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.497 [2024-07-25 13:52:31.360382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.497 [2024-07-25 13:52:31.369628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.497 [2024-07-25 13:52:31.370045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.497 [2024-07-25 13:52:31.370098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.497 [2024-07-25 13:52:31.370115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.497 [2024-07-25 13:52:31.370370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.497 [2024-07-25 13:52:31.370559] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.497 [2024-07-25 13:52:31.370581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.497 [2024-07-25 13:52:31.370595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.497 [2024-07-25 13:52:31.373432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.497 [2024-07-25 13:52:31.382699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.497 [2024-07-25 13:52:31.383039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.497 [2024-07-25 13:52:31.383087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.497 [2024-07-25 13:52:31.383105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.497 [2024-07-25 13:52:31.383340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.497 [2024-07-25 13:52:31.383544] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.497 [2024-07-25 13:52:31.383563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.497 [2024-07-25 13:52:31.383576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.498 [2024-07-25 13:52:31.386469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.498 [2024-07-25 13:52:31.395886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.498 [2024-07-25 13:52:31.396315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.498 [2024-07-25 13:52:31.396343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.498 [2024-07-25 13:52:31.396358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.498 [2024-07-25 13:52:31.396603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.498 [2024-07-25 13:52:31.396806] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.498 [2024-07-25 13:52:31.396825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.498 [2024-07-25 13:52:31.396837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.498 [2024-07-25 13:52:31.399806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.498 [2024-07-25 13:52:31.408934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.498 [2024-07-25 13:52:31.409366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.498 [2024-07-25 13:52:31.409393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.498 [2024-07-25 13:52:31.409408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.498 [2024-07-25 13:52:31.409626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.498 [2024-07-25 13:52:31.409829] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.498 [2024-07-25 13:52:31.409847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.498 [2024-07-25 13:52:31.409860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.498 [2024-07-25 13:52:31.412649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.498 [2024-07-25 13:52:31.422015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.498 [2024-07-25 13:52:31.422397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.498 [2024-07-25 13:52:31.422440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.498 [2024-07-25 13:52:31.422456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.498 [2024-07-25 13:52:31.422693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.498 [2024-07-25 13:52:31.422895] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.498 [2024-07-25 13:52:31.422915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.498 [2024-07-25 13:52:31.422927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.498 [2024-07-25 13:52:31.425852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.498 [2024-07-25 13:52:31.435082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.498 [2024-07-25 13:52:31.435436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.498 [2024-07-25 13:52:31.435463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.498 [2024-07-25 13:52:31.435478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.498 [2024-07-25 13:52:31.435711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.498 [2024-07-25 13:52:31.435914] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.498 [2024-07-25 13:52:31.435934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.498 [2024-07-25 13:52:31.435947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.498 [2024-07-25 13:52:31.438850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.498 [2024-07-25 13:52:31.448109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.498 [2024-07-25 13:52:31.448420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.498 [2024-07-25 13:52:31.448447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.498 [2024-07-25 13:52:31.448463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.498 [2024-07-25 13:52:31.448679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.498 [2024-07-25 13:52:31.448882] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.498 [2024-07-25 13:52:31.448902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.498 [2024-07-25 13:52:31.448915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.498 [2024-07-25 13:52:31.451814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.498 [2024-07-25 13:52:31.461209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.498 [2024-07-25 13:52:31.461615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.498 [2024-07-25 13:52:31.461643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.498 [2024-07-25 13:52:31.461658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.498 [2024-07-25 13:52:31.461897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.498 [2024-07-25 13:52:31.462130] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.498 [2024-07-25 13:52:31.462165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.498 [2024-07-25 13:52:31.462179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.498 [2024-07-25 13:52:31.465033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.498 [2024-07-25 13:52:31.474201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.498 [2024-07-25 13:52:31.474608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.498 [2024-07-25 13:52:31.474636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.498 [2024-07-25 13:52:31.474651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.498 [2024-07-25 13:52:31.474887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.498 [2024-07-25 13:52:31.475120] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.498 [2024-07-25 13:52:31.475155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.498 [2024-07-25 13:52:31.475168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.498 [2024-07-25 13:52:31.478022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.498 [2024-07-25 13:52:31.487294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.498 [2024-07-25 13:52:31.487697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.498 [2024-07-25 13:52:31.487725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.498 [2024-07-25 13:52:31.487741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.498 [2024-07-25 13:52:31.487976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.498 [2024-07-25 13:52:31.488211] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.498 [2024-07-25 13:52:31.488233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.498 [2024-07-25 13:52:31.488246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.498 [2024-07-25 13:52:31.491115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.498 [2024-07-25 13:52:31.500454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.498 [2024-07-25 13:52:31.500799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.498 [2024-07-25 13:52:31.500826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.498 [2024-07-25 13:52:31.500841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.498 [2024-07-25 13:52:31.501080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.498 [2024-07-25 13:52:31.501293] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.498 [2024-07-25 13:52:31.501314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.498 [2024-07-25 13:52:31.501332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.499 [2024-07-25 13:52:31.504203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.499 [2024-07-25 13:52:31.513688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.499 [2024-07-25 13:52:31.514097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.499 [2024-07-25 13:52:31.514125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.499 [2024-07-25 13:52:31.514141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.499 [2024-07-25 13:52:31.514375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.499 [2024-07-25 13:52:31.514563] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.499 [2024-07-25 13:52:31.514583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.499 [2024-07-25 13:52:31.514595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.499 [2024-07-25 13:52:31.517493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.499 [2024-07-25 13:52:31.526769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.499 [2024-07-25 13:52:31.527181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.499 [2024-07-25 13:52:31.527210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.499 [2024-07-25 13:52:31.527227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.499 [2024-07-25 13:52:31.527449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.499 [2024-07-25 13:52:31.527688] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.499 [2024-07-25 13:52:31.527708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.499 [2024-07-25 13:52:31.527720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.499 [2024-07-25 13:52:31.530842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.759 [2024-07-25 13:52:31.540020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.759 [2024-07-25 13:52:31.540455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.759 [2024-07-25 13:52:31.540484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.759 [2024-07-25 13:52:31.540500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.759 [2024-07-25 13:52:31.540734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.759 [2024-07-25 13:52:31.540938] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.759 [2024-07-25 13:52:31.540958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.759 [2024-07-25 13:52:31.540970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.759 [2024-07-25 13:52:31.543868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.759 [2024-07-25 13:52:31.553017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.759 [2024-07-25 13:52:31.553373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.759 [2024-07-25 13:52:31.553401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.759 [2024-07-25 13:52:31.553416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.759 [2024-07-25 13:52:31.553650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.759 [2024-07-25 13:52:31.553853] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.759 [2024-07-25 13:52:31.553873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.759 [2024-07-25 13:52:31.553885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.759 [2024-07-25 13:52:31.556798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.759 [2024-07-25 13:52:31.565982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.759 [2024-07-25 13:52:31.566334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.759 [2024-07-25 13:52:31.566362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.759 [2024-07-25 13:52:31.566378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.759 [2024-07-25 13:52:31.566612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.759 [2024-07-25 13:52:31.566815] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.759 [2024-07-25 13:52:31.566835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.759 [2024-07-25 13:52:31.566848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.759 [2024-07-25 13:52:31.569744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.760 [2024-07-25 13:52:31.579175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.760 [2024-07-25 13:52:31.579581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.760 [2024-07-25 13:52:31.579609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.760 [2024-07-25 13:52:31.579624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.760 [2024-07-25 13:52:31.579858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.760 [2024-07-25 13:52:31.580088] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.760 [2024-07-25 13:52:31.580124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.760 [2024-07-25 13:52:31.580138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.760 [2024-07-25 13:52:31.583123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.760 [2024-07-25 13:52:31.592281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.760 [2024-07-25 13:52:31.592722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.760 [2024-07-25 13:52:31.592749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.760 [2024-07-25 13:52:31.592765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.760 [2024-07-25 13:52:31.592999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.760 [2024-07-25 13:52:31.593246] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.760 [2024-07-25 13:52:31.593269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.760 [2024-07-25 13:52:31.593283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.760 [2024-07-25 13:52:31.596785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.760 [2024-07-25 13:52:31.605415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.760 [2024-07-25 13:52:31.605760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.760 [2024-07-25 13:52:31.605787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.760 [2024-07-25 13:52:31.605802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.760 [2024-07-25 13:52:31.606031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.760 [2024-07-25 13:52:31.606266] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.760 [2024-07-25 13:52:31.606290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.760 [2024-07-25 13:52:31.606304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.760 [2024-07-25 13:52:31.609532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.760 [2024-07-25 13:52:31.618694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.760 [2024-07-25 13:52:31.619102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.760 [2024-07-25 13:52:31.619142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.760 [2024-07-25 13:52:31.619161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.760 [2024-07-25 13:52:31.619400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.760 [2024-07-25 13:52:31.619603] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.760 [2024-07-25 13:52:31.619623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.760 [2024-07-25 13:52:31.619635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.760 [2024-07-25 13:52:31.622472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.760 [2024-07-25 13:52:31.631822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.760 [2024-07-25 13:52:31.632229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.760 [2024-07-25 13:52:31.632258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.760 [2024-07-25 13:52:31.632275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.760 [2024-07-25 13:52:31.632511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.760 [2024-07-25 13:52:31.632714] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.760 [2024-07-25 13:52:31.632734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.760 [2024-07-25 13:52:31.632747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.760 [2024-07-25 13:52:31.635660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.760 [2024-07-25 13:52:31.644887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.760 [2024-07-25 13:52:31.645304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.760 [2024-07-25 13:52:31.645332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.760 [2024-07-25 13:52:31.645347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.760 [2024-07-25 13:52:31.645581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.760 [2024-07-25 13:52:31.645784] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.760 [2024-07-25 13:52:31.645804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.760 [2024-07-25 13:52:31.645817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.760 [2024-07-25 13:52:31.648734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.760 [2024-07-25 13:52:31.657962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.760 [2024-07-25 13:52:31.658314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.760 [2024-07-25 13:52:31.658342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.760 [2024-07-25 13:52:31.658358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.760 [2024-07-25 13:52:31.658592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.760 [2024-07-25 13:52:31.658794] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.760 [2024-07-25 13:52:31.658814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.760 [2024-07-25 13:52:31.658826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.760 [2024-07-25 13:52:31.661745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.760 [2024-07-25 13:52:31.670972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.760 [2024-07-25 13:52:31.671387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.760 [2024-07-25 13:52:31.671415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.760 [2024-07-25 13:52:31.671431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.760 [2024-07-25 13:52:31.671665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.760 [2024-07-25 13:52:31.671870] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.760 [2024-07-25 13:52:31.671890] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.760 [2024-07-25 13:52:31.671903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.760 [2024-07-25 13:52:31.674821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.760 [2024-07-25 13:52:31.684053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.760 [2024-07-25 13:52:31.684464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.760 [2024-07-25 13:52:31.684496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.760 [2024-07-25 13:52:31.684512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.760 [2024-07-25 13:52:31.684746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.760 [2024-07-25 13:52:31.684949] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.760 [2024-07-25 13:52:31.684969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.760 [2024-07-25 13:52:31.684981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.760 [2024-07-25 13:52:31.687899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.760 [2024-07-25 13:52:31.697192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.760 [2024-07-25 13:52:31.697601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.761 [2024-07-25 13:52:31.697629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.761 [2024-07-25 13:52:31.697645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.761 [2024-07-25 13:52:31.697880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.761 [2024-07-25 13:52:31.698126] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.761 [2024-07-25 13:52:31.698149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.761 [2024-07-25 13:52:31.698163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.761 [2024-07-25 13:52:31.701036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.761 [2024-07-25 13:52:31.710433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.761 [2024-07-25 13:52:31.710778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.761 [2024-07-25 13:52:31.710806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.761 [2024-07-25 13:52:31.710821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.761 [2024-07-25 13:52:31.711055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.761 [2024-07-25 13:52:31.711282] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.761 [2024-07-25 13:52:31.711303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.761 [2024-07-25 13:52:31.711316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.761 [2024-07-25 13:52:31.714202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.761 [2024-07-25 13:52:31.723480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.761 [2024-07-25 13:52:31.723896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.761 [2024-07-25 13:52:31.723924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.761 [2024-07-25 13:52:31.723940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.761 [2024-07-25 13:52:31.724195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.761 [2024-07-25 13:52:31.724415] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.761 [2024-07-25 13:52:31.724436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.761 [2024-07-25 13:52:31.724450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.761 [2024-07-25 13:52:31.727324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.761 [2024-07-25 13:52:31.736537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.761 [2024-07-25 13:52:31.736910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.761 [2024-07-25 13:52:31.736937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.761 [2024-07-25 13:52:31.736952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.761 [2024-07-25 13:52:31.737201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.761 [2024-07-25 13:52:31.737425] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.761 [2024-07-25 13:52:31.737445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.761 [2024-07-25 13:52:31.737458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.761 [2024-07-25 13:52:31.740310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.761 [2024-07-25 13:52:31.749517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.761 [2024-07-25 13:52:31.749921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.761 [2024-07-25 13:52:31.749948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.761 [2024-07-25 13:52:31.749963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.761 [2024-07-25 13:52:31.750221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.761 [2024-07-25 13:52:31.750449] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.761 [2024-07-25 13:52:31.750469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.761 [2024-07-25 13:52:31.750481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.761 [2024-07-25 13:52:31.753334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.761 [2024-07-25 13:52:31.762615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.761 [2024-07-25 13:52:31.763020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.761 [2024-07-25 13:52:31.763047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:34.761 [2024-07-25 13:52:31.763073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:34.761 [2024-07-25 13:52:31.763330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:34.761 [2024-07-25 13:52:31.763549] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.761 [2024-07-25 13:52:31.763569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.761 [2024-07-25 13:52:31.763581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.761 [2024-07-25 13:52:31.766333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.761 [2024-07-25 13:52:31.775612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:34.761 [2024-07-25 13:52:31.776024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.761 [2024-07-25 13:52:31.776050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:34.761 [2024-07-25 13:52:31.776092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:34.761 [2024-07-25 13:52:31.776348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:34.761 [2024-07-25 13:52:31.776570] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:34.761 [2024-07-25 13:52:31.776590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:34.761 [2024-07-25 13:52:31.776603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:34.761 [2024-07-25 13:52:31.779471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:34.761 [2024-07-25 13:52:31.788696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:34.761 [2024-07-25 13:52:31.789157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:34.761 [2024-07-25 13:52:31.789186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:34.761 [2024-07-25 13:52:31.789201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:34.761 [2024-07-25 13:52:31.789448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:34.761 [2024-07-25 13:52:31.789635] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:34.761 [2024-07-25 13:52:31.789655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:34.761 [2024-07-25 13:52:31.789668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:34.761 [2024-07-25 13:52:31.792776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.023 [2024-07-25 13:52:31.802047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.023 [2024-07-25 13:52:31.802394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.023 [2024-07-25 13:52:31.802422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.023 [2024-07-25 13:52:31.802438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.023 [2024-07-25 13:52:31.802655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.023 [2024-07-25 13:52:31.802859] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.023 [2024-07-25 13:52:31.802879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.023 [2024-07-25 13:52:31.802891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.023 [2024-07-25 13:52:31.805800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.023 [2024-07-25 13:52:31.815281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.023 [2024-07-25 13:52:31.815609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.023 [2024-07-25 13:52:31.815636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.023 [2024-07-25 13:52:31.815656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.023 [2024-07-25 13:52:31.815852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.023 [2024-07-25 13:52:31.816117] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.023 [2024-07-25 13:52:31.816139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.023 [2024-07-25 13:52:31.816152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.023 [2024-07-25 13:52:31.819016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.023 [2024-07-25 13:52:31.828335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.023 [2024-07-25 13:52:31.828750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.023 [2024-07-25 13:52:31.828779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.023 [2024-07-25 13:52:31.828795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.023 [2024-07-25 13:52:31.829030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.023 [2024-07-25 13:52:31.829274] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.023 [2024-07-25 13:52:31.829296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.023 [2024-07-25 13:52:31.829310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.023 [2024-07-25 13:52:31.832202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.023 [2024-07-25 13:52:31.841648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.023 [2024-07-25 13:52:31.841963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.023 [2024-07-25 13:52:31.841997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.023 [2024-07-25 13:52:31.842031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.023 [2024-07-25 13:52:31.842285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.023 [2024-07-25 13:52:31.842503] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.023 [2024-07-25 13:52:31.842523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.023 [2024-07-25 13:52:31.842536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.023 [2024-07-25 13:52:31.845632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.023 [2024-07-25 13:52:31.854973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.023 [2024-07-25 13:52:31.855313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.023 [2024-07-25 13:52:31.855342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.023 [2024-07-25 13:52:31.855374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.023 [2024-07-25 13:52:31.855608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.023 [2024-07-25 13:52:31.855818] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.023 [2024-07-25 13:52:31.855846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.023 [2024-07-25 13:52:31.855860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.023 [2024-07-25 13:52:31.858940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.023 [2024-07-25 13:52:31.868556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.023 [2024-07-25 13:52:31.868971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.023 [2024-07-25 13:52:31.869000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.023 [2024-07-25 13:52:31.869016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.023 [2024-07-25 13:52:31.869240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.023 [2024-07-25 13:52:31.869477] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.023 [2024-07-25 13:52:31.869497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.023 [2024-07-25 13:52:31.869509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.023 [2024-07-25 13:52:31.872621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.023 [2024-07-25 13:52:31.881935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.023 [2024-07-25 13:52:31.882300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.023 [2024-07-25 13:52:31.882329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.023 [2024-07-25 13:52:31.882345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.023 [2024-07-25 13:52:31.882581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.023 [2024-07-25 13:52:31.882789] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.023 [2024-07-25 13:52:31.882810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.023 [2024-07-25 13:52:31.882822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.023 [2024-07-25 13:52:31.885983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.023 [2024-07-25 13:52:31.895274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.023 [2024-07-25 13:52:31.895638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.023 [2024-07-25 13:52:31.895667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.023 [2024-07-25 13:52:31.895683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.023 [2024-07-25 13:52:31.895917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.024 [2024-07-25 13:52:31.896150] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.024 [2024-07-25 13:52:31.896172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.024 [2024-07-25 13:52:31.896185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.024 [2024-07-25 13:52:31.899164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.024 [2024-07-25 13:52:31.908733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.024 [2024-07-25 13:52:31.909122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.024 [2024-07-25 13:52:31.909151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.024 [2024-07-25 13:52:31.909167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.024 [2024-07-25 13:52:31.909408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.024 [2024-07-25 13:52:31.909617] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.024 [2024-07-25 13:52:31.909637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.024 [2024-07-25 13:52:31.909650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.024 [2024-07-25 13:52:31.912770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.024 [2024-07-25 13:52:31.922196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.024 [2024-07-25 13:52:31.922628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.024 [2024-07-25 13:52:31.922658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.024 [2024-07-25 13:52:31.922689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.024 [2024-07-25 13:52:31.922924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.024 [2024-07-25 13:52:31.923163] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.024 [2024-07-25 13:52:31.923187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.024 [2024-07-25 13:52:31.923202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.024 [2024-07-25 13:52:31.926281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.024 [2024-07-25 13:52:31.935667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.024 [2024-07-25 13:52:31.936026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.024 [2024-07-25 13:52:31.936055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.024 [2024-07-25 13:52:31.936084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.024 [2024-07-25 13:52:31.936314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.024 [2024-07-25 13:52:31.936544] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.024 [2024-07-25 13:52:31.936565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.024 [2024-07-25 13:52:31.936577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.024 [2024-07-25 13:52:31.939584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.024 [2024-07-25 13:52:31.949035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.024 [2024-07-25 13:52:31.949418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.024 [2024-07-25 13:52:31.949446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.024 [2024-07-25 13:52:31.949462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.024 [2024-07-25 13:52:31.949701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.024 [2024-07-25 13:52:31.949895] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.024 [2024-07-25 13:52:31.949916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.024 [2024-07-25 13:52:31.949929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.024 [2024-07-25 13:52:31.952934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.024 [2024-07-25 13:52:31.962385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.024 [2024-07-25 13:52:31.962800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.024 [2024-07-25 13:52:31.962828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.024 [2024-07-25 13:52:31.962844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.024 [2024-07-25 13:52:31.963100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.024 [2024-07-25 13:52:31.963321] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.024 [2024-07-25 13:52:31.963342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.024 [2024-07-25 13:52:31.963357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.024 [2024-07-25 13:52:31.966479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.024 [2024-07-25 13:52:31.975663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.024 [2024-07-25 13:52:31.976037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.024 [2024-07-25 13:52:31.976073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.024 [2024-07-25 13:52:31.976106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.024 [2024-07-25 13:52:31.976336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.024 [2024-07-25 13:52:31.976560] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.024 [2024-07-25 13:52:31.976580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.024 [2024-07-25 13:52:31.976593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.024 [2024-07-25 13:52:31.979608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.024 [2024-07-25 13:52:31.988842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.024 [2024-07-25 13:52:31.989222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.024 [2024-07-25 13:52:31.989250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.024 [2024-07-25 13:52:31.989266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.024 [2024-07-25 13:52:31.989483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.024 [2024-07-25 13:52:31.989685] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.024 [2024-07-25 13:52:31.989705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.024 [2024-07-25 13:52:31.989722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.024 [2024-07-25 13:52:31.992620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.024 [2024-07-25 13:52:32.001886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.024 [2024-07-25 13:52:32.002302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.024 [2024-07-25 13:52:32.002330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.024 [2024-07-25 13:52:32.002346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.024 [2024-07-25 13:52:32.002579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.024 [2024-07-25 13:52:32.002782] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.024 [2024-07-25 13:52:32.002811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.024 [2024-07-25 13:52:32.002824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.024 [2024-07-25 13:52:32.005739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.024 [2024-07-25 13:52:32.014967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.024 [2024-07-25 13:52:32.015318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.024 [2024-07-25 13:52:32.015346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.024 [2024-07-25 13:52:32.015362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.024 [2024-07-25 13:52:32.015596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.024 [2024-07-25 13:52:32.015799] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.025 [2024-07-25 13:52:32.015818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.025 [2024-07-25 13:52:32.015832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.025 [2024-07-25 13:52:32.018732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.025 [2024-07-25 13:52:32.027963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.025 [2024-07-25 13:52:32.028377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.025 [2024-07-25 13:52:32.028405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.025 [2024-07-25 13:52:32.028421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.025 [2024-07-25 13:52:32.028656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.025 [2024-07-25 13:52:32.028859] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.025 [2024-07-25 13:52:32.028879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.025 [2024-07-25 13:52:32.028892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.025 [2024-07-25 13:52:32.031790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.025 [2024-07-25 13:52:32.041027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.025 [2024-07-25 13:52:32.041385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.025 [2024-07-25 13:52:32.041415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.025 [2024-07-25 13:52:32.041431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.025 [2024-07-25 13:52:32.041641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.025 [2024-07-25 13:52:32.041843] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.025 [2024-07-25 13:52:32.041863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.025 [2024-07-25 13:52:32.041876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.025 [2024-07-25 13:52:32.044751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.025 [2024-07-25 13:52:32.054508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.025 [2024-07-25 13:52:32.054854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.025 [2024-07-25 13:52:32.054898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.025 [2024-07-25 13:52:32.054915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.025 [2024-07-25 13:52:32.055171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.025 [2024-07-25 13:52:32.055394] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.025 [2024-07-25 13:52:32.055415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.025 [2024-07-25 13:52:32.055443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.286 [2024-07-25 13:52:32.058511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.286 [2024-07-25 13:52:32.067740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.286 [2024-07-25 13:52:32.068222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.286 [2024-07-25 13:52:32.068252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.286 [2024-07-25 13:52:32.068269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.286 [2024-07-25 13:52:32.068507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.286 [2024-07-25 13:52:32.068712] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.286 [2024-07-25 13:52:32.068732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.286 [2024-07-25 13:52:32.068746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.286 [2024-07-25 13:52:32.071652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.286 [2024-07-25 13:52:32.080698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.286 [2024-07-25 13:52:32.081026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.286 [2024-07-25 13:52:32.081097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.286 [2024-07-25 13:52:32.081113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.286 [2024-07-25 13:52:32.081341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.286 [2024-07-25 13:52:32.081548] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.287 [2024-07-25 13:52:32.081568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.287 [2024-07-25 13:52:32.081581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.287 [2024-07-25 13:52:32.084373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.287 [2024-07-25 13:52:32.093810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.287 [2024-07-25 13:52:32.094155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.287 [2024-07-25 13:52:32.094183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.287 [2024-07-25 13:52:32.094199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.287 [2024-07-25 13:52:32.094410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.287 [2024-07-25 13:52:32.094612] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.287 [2024-07-25 13:52:32.094632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.287 [2024-07-25 13:52:32.094644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.287 [2024-07-25 13:52:32.097524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.287 [2024-07-25 13:52:32.107136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.287 [2024-07-25 13:52:32.107478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.287 [2024-07-25 13:52:32.107506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.287 [2024-07-25 13:52:32.107520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.287 [2024-07-25 13:52:32.107736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.287 [2024-07-25 13:52:32.107940] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.287 [2024-07-25 13:52:32.107960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.287 [2024-07-25 13:52:32.107973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.287 [2024-07-25 13:52:32.110941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.287 [2024-07-25 13:52:32.120439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.287 [2024-07-25 13:52:32.120837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.287 [2024-07-25 13:52:32.120891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.287 [2024-07-25 13:52:32.120907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.287 [2024-07-25 13:52:32.121165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.287 [2024-07-25 13:52:32.121390] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.287 [2024-07-25 13:52:32.121425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.287 [2024-07-25 13:52:32.121438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.287 [2024-07-25 13:52:32.124418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.287 [2024-07-25 13:52:32.133547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.287 [2024-07-25 13:52:32.133896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.287 [2024-07-25 13:52:32.133924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.287 [2024-07-25 13:52:32.133939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.287 [2024-07-25 13:52:32.134175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.287 [2024-07-25 13:52:32.134407] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.287 [2024-07-25 13:52:32.134441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.287 [2024-07-25 13:52:32.134455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.287 [2024-07-25 13:52:32.137318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.287 [2024-07-25 13:52:32.146547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.287 [2024-07-25 13:52:32.147017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.287 [2024-07-25 13:52:32.147079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.287 [2024-07-25 13:52:32.147097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.287 [2024-07-25 13:52:32.147339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.287 [2024-07-25 13:52:32.147526] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.287 [2024-07-25 13:52:32.147545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.287 [2024-07-25 13:52:32.147557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.287 [2024-07-25 13:52:32.150350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.287 [2024-07-25 13:52:32.159673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.287 [2024-07-25 13:52:32.160070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.287 [2024-07-25 13:52:32.160099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.287 [2024-07-25 13:52:32.160130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.287 [2024-07-25 13:52:32.160373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.287 [2024-07-25 13:52:32.160560] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.287 [2024-07-25 13:52:32.160580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.287 [2024-07-25 13:52:32.160592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.287 [2024-07-25 13:52:32.163385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.287 [2024-07-25 13:52:32.172660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.287 [2024-07-25 13:52:32.172970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.287 [2024-07-25 13:52:32.172997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.287 [2024-07-25 13:52:32.173017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.287 [2024-07-25 13:52:32.173268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.287 [2024-07-25 13:52:32.173492] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.287 [2024-07-25 13:52:32.173512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.287 [2024-07-25 13:52:32.173525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.287 [2024-07-25 13:52:32.176391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.287 [2024-07-25 13:52:32.185740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.287 [2024-07-25 13:52:32.186112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.287 [2024-07-25 13:52:32.186138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.287 [2024-07-25 13:52:32.186153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.287 [2024-07-25 13:52:32.186368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.287 [2024-07-25 13:52:32.186571] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.287 [2024-07-25 13:52:32.186591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.287 [2024-07-25 13:52:32.186604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.287 [2024-07-25 13:52:32.189486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.287 [2024-07-25 13:52:32.198878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.287 [2024-07-25 13:52:32.199233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.287 [2024-07-25 13:52:32.199260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.287 [2024-07-25 13:52:32.199276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.287 [2024-07-25 13:52:32.199510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.287 [2024-07-25 13:52:32.199712] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.287 [2024-07-25 13:52:32.199733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.287 [2024-07-25 13:52:32.199745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.288 [2024-07-25 13:52:32.202643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.288 [2024-07-25 13:52:32.212213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.288 [2024-07-25 13:52:32.212612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.288 [2024-07-25 13:52:32.212640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.288 [2024-07-25 13:52:32.212655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.288 [2024-07-25 13:52:32.212891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.288 [2024-07-25 13:52:32.213141] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.288 [2024-07-25 13:52:32.213166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.288 [2024-07-25 13:52:32.213180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.288 [2024-07-25 13:52:32.216160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.288 [2024-07-25 13:52:32.225745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.288 [2024-07-25 13:52:32.226151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.288 [2024-07-25 13:52:32.226184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.288 [2024-07-25 13:52:32.226200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.288 [2024-07-25 13:52:32.226431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.288 [2024-07-25 13:52:32.226641] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.288 [2024-07-25 13:52:32.226660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.288 [2024-07-25 13:52:32.226673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.288 [2024-07-25 13:52:32.229864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.288 [2024-07-25 13:52:32.239291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.288 [2024-07-25 13:52:32.239652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.288 [2024-07-25 13:52:32.239681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.288 [2024-07-25 13:52:32.239698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.288 [2024-07-25 13:52:32.239913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.288 [2024-07-25 13:52:32.240171] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.288 [2024-07-25 13:52:32.240194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.288 [2024-07-25 13:52:32.240208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.288 [2024-07-25 13:52:32.243525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.288 [2024-07-25 13:52:32.252920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.288 [2024-07-25 13:52:32.253254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.288 [2024-07-25 13:52:32.253283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.288 [2024-07-25 13:52:32.253299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.288 [2024-07-25 13:52:32.253542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.288 [2024-07-25 13:52:32.253774] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.288 [2024-07-25 13:52:32.253797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.288 [2024-07-25 13:52:32.253811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.288 [2024-07-25 13:52:32.257097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.288 [2024-07-25 13:52:32.266404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.288 [2024-07-25 13:52:32.266743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.288 [2024-07-25 13:52:32.266772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.288 [2024-07-25 13:52:32.266788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.288 [2024-07-25 13:52:32.267017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.288 [2024-07-25 13:52:32.267329] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.288 [2024-07-25 13:52:32.267368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.288 [2024-07-25 13:52:32.267383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.288 [2024-07-25 13:52:32.270631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.288 [2024-07-25 13:52:32.280000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.288 [2024-07-25 13:52:32.280330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.288 [2024-07-25 13:52:32.280360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.288 [2024-07-25 13:52:32.280376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.288 [2024-07-25 13:52:32.280605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.288 [2024-07-25 13:52:32.280836] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.288 [2024-07-25 13:52:32.280857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.288 [2024-07-25 13:52:32.280870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.288 [2024-07-25 13:52:32.283944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.288 [2024-07-25 13:52:32.293560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.288 [2024-07-25 13:52:32.293960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.288 [2024-07-25 13:52:32.293988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.288 [2024-07-25 13:52:32.294004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.288 [2024-07-25 13:52:32.294230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.288 [2024-07-25 13:52:32.294473] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.288 [2024-07-25 13:52:32.294495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.288 [2024-07-25 13:52:32.294509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.288 [2024-07-25 13:52:32.297729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.288 [2024-07-25 13:52:32.307116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.288 [2024-07-25 13:52:32.307520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.288 [2024-07-25 13:52:32.307548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.288 [2024-07-25 13:52:32.307569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.288 [2024-07-25 13:52:32.307813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.288 [2024-07-25 13:52:32.308012] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.288 [2024-07-25 13:52:32.308034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.288 [2024-07-25 13:52:32.308073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.288 [2024-07-25 13:52:32.311380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.550 [2024-07-25 13:52:32.320829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.550 [2024-07-25 13:52:32.321148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.550 [2024-07-25 13:52:32.321178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.550 [2024-07-25 13:52:32.321195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.550 [2024-07-25 13:52:32.321429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.550 [2024-07-25 13:52:32.321682] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.550 [2024-07-25 13:52:32.321704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.550 [2024-07-25 13:52:32.321718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.550 [2024-07-25 13:52:32.324921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.550 [2024-07-25 13:52:32.334075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.550 [2024-07-25 13:52:32.334538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.550 [2024-07-25 13:52:32.334566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.550 [2024-07-25 13:52:32.334581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.550 [2024-07-25 13:52:32.334816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.550 [2024-07-25 13:52:32.335019] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.550 [2024-07-25 13:52:32.335039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.550 [2024-07-25 13:52:32.335075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.550 [2024-07-25 13:52:32.338106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.550 [2024-07-25 13:52:32.347308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.550 [2024-07-25 13:52:32.347669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:35.550 [2024-07-25 13:52:32.347698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:35.550 [2024-07-25 13:52:32.347713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:35.550 [2024-07-25 13:52:32.347962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:35.550 [2024-07-25 13:52:32.348202] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:35.550 [2024-07-25 13:52:32.348229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:35.550 [2024-07-25 13:52:32.348244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.550 [2024-07-25 13:52:32.351662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:35.550 [2024-07-25 13:52:32.360574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.550 [2024-07-25 13:52:32.360952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.550 [2024-07-25 13:52:32.360979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.550 [2024-07-25 13:52:32.360995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.550 [2024-07-25 13:52:32.361262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.550 [2024-07-25 13:52:32.361469] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.550 [2024-07-25 13:52:32.361490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.550 [2024-07-25 13:52:32.361503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.551 [2024-07-25 13:52:32.364439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:35.551 [2024-07-25 13:52:32.373762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.551 [2024-07-25 13:52:32.374170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.551 [2024-07-25 13:52:32.374198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.551 [2024-07-25 13:52:32.374213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.551 [2024-07-25 13:52:32.374448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.551 [2024-07-25 13:52:32.374651] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.551 [2024-07-25 13:52:32.374672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.551 [2024-07-25 13:52:32.374686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.551 [2024-07-25 13:52:32.377634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:35.551 [2024-07-25 13:52:32.386855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.551 [2024-07-25 13:52:32.387273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.551 [2024-07-25 13:52:32.387301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.551 [2024-07-25 13:52:32.387317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.551 [2024-07-25 13:52:32.387555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.551 [2024-07-25 13:52:32.387757] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.551 [2024-07-25 13:52:32.387778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.551 [2024-07-25 13:52:32.387791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.551 [2024-07-25 13:52:32.390689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:35.551 [2024-07-25 13:52:32.400156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.551 [2024-07-25 13:52:32.400536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.551 [2024-07-25 13:52:32.400563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.551 [2024-07-25 13:52:32.400578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.551 [2024-07-25 13:52:32.400793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.551 [2024-07-25 13:52:32.400996] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.551 [2024-07-25 13:52:32.401016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.551 [2024-07-25 13:52:32.401028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.551 [2024-07-25 13:52:32.403938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:35.551 [2024-07-25 13:52:32.413309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.551 [2024-07-25 13:52:32.413670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.551 [2024-07-25 13:52:32.413698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.551 [2024-07-25 13:52:32.413713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.551 [2024-07-25 13:52:32.413947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.551 [2024-07-25 13:52:32.414197] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.551 [2024-07-25 13:52:32.414219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.551 [2024-07-25 13:52:32.414233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.551 [2024-07-25 13:52:32.417119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:35.551 [2024-07-25 13:52:32.426518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.551 [2024-07-25 13:52:32.426926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.551 [2024-07-25 13:52:32.426953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.551 [2024-07-25 13:52:32.426969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.551 [2024-07-25 13:52:32.427233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.551 [2024-07-25 13:52:32.427464] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.551 [2024-07-25 13:52:32.427483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.551 [2024-07-25 13:52:32.427495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.551 [2024-07-25 13:52:32.430416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:35.551 [2024-07-25 13:52:32.439859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.551 [2024-07-25 13:52:32.440243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.551 [2024-07-25 13:52:32.440272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.551 [2024-07-25 13:52:32.440288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.551 [2024-07-25 13:52:32.440543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.551 [2024-07-25 13:52:32.440747] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.551 [2024-07-25 13:52:32.440766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.551 [2024-07-25 13:52:32.440778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.551 [2024-07-25 13:52:32.443774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:35.551 [2024-07-25 13:52:32.453281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.551 [2024-07-25 13:52:32.453662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.551 [2024-07-25 13:52:32.453690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.551 [2024-07-25 13:52:32.453706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.551 [2024-07-25 13:52:32.453945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.551 [2024-07-25 13:52:32.454202] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.551 [2024-07-25 13:52:32.454223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.551 [2024-07-25 13:52:32.454237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.551 [2024-07-25 13:52:32.457319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:35.551 [2024-07-25 13:52:32.466513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.551 [2024-07-25 13:52:32.466840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.551 [2024-07-25 13:52:32.466868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.551 [2024-07-25 13:52:32.466883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.551 [2024-07-25 13:52:32.467134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.551 [2024-07-25 13:52:32.467367] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.551 [2024-07-25 13:52:32.467388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.551 [2024-07-25 13:52:32.467403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.551 [2024-07-25 13:52:32.470435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:35.551 [2024-07-25 13:52:32.479781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.551 [2024-07-25 13:52:32.480135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.551 [2024-07-25 13:52:32.480164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.551 [2024-07-25 13:52:32.480180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.551 [2024-07-25 13:52:32.480419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.551 [2024-07-25 13:52:32.480636] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.551 [2024-07-25 13:52:32.480657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.552 [2024-07-25 13:52:32.480675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.552 [2024-07-25 13:52:32.483641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:35.552 [2024-07-25 13:52:32.493093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.552 [2024-07-25 13:52:32.493454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.552 [2024-07-25 13:52:32.493482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.552 [2024-07-25 13:52:32.493498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.552 [2024-07-25 13:52:32.493740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.552 [2024-07-25 13:52:32.493949] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.552 [2024-07-25 13:52:32.493970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.552 [2024-07-25 13:52:32.493983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.552 [2024-07-25 13:52:32.496981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:35.552 [2024-07-25 13:52:32.506267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.552 [2024-07-25 13:52:32.506634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.552 [2024-07-25 13:52:32.506662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.552 [2024-07-25 13:52:32.506678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.552 [2024-07-25 13:52:32.506912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.552 [2024-07-25 13:52:32.507171] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.552 [2024-07-25 13:52:32.507194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.552 [2024-07-25 13:52:32.507207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.552 [2024-07-25 13:52:32.510183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:35.552 [2024-07-25 13:52:32.519486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.552 [2024-07-25 13:52:32.519871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.552 [2024-07-25 13:52:32.519898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.552 [2024-07-25 13:52:32.519914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.552 [2024-07-25 13:52:32.520165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.552 [2024-07-25 13:52:32.520392] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.552 [2024-07-25 13:52:32.520412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.552 [2024-07-25 13:52:32.520425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.552 [2024-07-25 13:52:32.523382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:35.552 [2024-07-25 13:52:32.532757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.552 [2024-07-25 13:52:32.533173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.552 [2024-07-25 13:52:32.533210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.552 [2024-07-25 13:52:32.533227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.552 [2024-07-25 13:52:32.533467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.552 [2024-07-25 13:52:32.533675] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.552 [2024-07-25 13:52:32.533696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.552 [2024-07-25 13:52:32.533710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.552 [2024-07-25 13:52:32.536698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:35.552 [2024-07-25 13:52:32.545936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.552 [2024-07-25 13:52:32.546253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.552 [2024-07-25 13:52:32.546295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.552 [2024-07-25 13:52:32.546311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.552 [2024-07-25 13:52:32.546536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.552 [2024-07-25 13:52:32.546747] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.552 [2024-07-25 13:52:32.546768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.552 [2024-07-25 13:52:32.546781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.552 [2024-07-25 13:52:32.549762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:35.552 [2024-07-25 13:52:32.559152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.552 [2024-07-25 13:52:32.559512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.552 [2024-07-25 13:52:32.559540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.552 [2024-07-25 13:52:32.559555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.552 [2024-07-25 13:52:32.559776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.552 [2024-07-25 13:52:32.559986] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.552 [2024-07-25 13:52:32.560006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.552 [2024-07-25 13:52:32.560019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.552 [2024-07-25 13:52:32.563008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:35.552 [2024-07-25 13:52:32.572418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.552 [2024-07-25 13:52:32.572800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.552 [2024-07-25 13:52:32.572828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.552 [2024-07-25 13:52:32.572843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.552 [2024-07-25 13:52:32.573074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.552 [2024-07-25 13:52:32.573284] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.552 [2024-07-25 13:52:32.573306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.552 [2024-07-25 13:52:32.573319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.552 [2024-07-25 13:52:32.576267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:35.815 [2024-07-25 13:52:32.585880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.815 [2024-07-25 13:52:32.586321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.815 [2024-07-25 13:52:32.586350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.815 [2024-07-25 13:52:32.586381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.815 [2024-07-25 13:52:32.586619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.815 [2024-07-25 13:52:32.586829] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.815 [2024-07-25 13:52:32.586848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.815 [2024-07-25 13:52:32.586860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.815 [2024-07-25 13:52:32.589978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:35.815 [2024-07-25 13:52:32.599155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.815 [2024-07-25 13:52:32.599604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.815 [2024-07-25 13:52:32.599633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.815 [2024-07-25 13:52:32.599649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.815 [2024-07-25 13:52:32.599892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.815 [2024-07-25 13:52:32.600159] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.815 [2024-07-25 13:52:32.600183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.815 [2024-07-25 13:52:32.600197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.815 [2024-07-25 13:52:32.603561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:35.815 [2024-07-25 13:52:32.612558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.815 [2024-07-25 13:52:32.612974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.815 [2024-07-25 13:52:32.613001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.815 [2024-07-25 13:52:32.613017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.815 [2024-07-25 13:52:32.613268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.815 [2024-07-25 13:52:32.613503] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.815 [2024-07-25 13:52:32.613524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.815 [2024-07-25 13:52:32.613537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.815 [2024-07-25 13:52:32.616618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:35.815 [2024-07-25 13:52:32.626054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.815 [2024-07-25 13:52:32.626452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.815 [2024-07-25 13:52:32.626480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.815 [2024-07-25 13:52:32.626495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.815 [2024-07-25 13:52:32.626735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.815 [2024-07-25 13:52:32.626928] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.815 [2024-07-25 13:52:32.626948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.815 [2024-07-25 13:52:32.626961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.815 [2024-07-25 13:52:32.629959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:35.815 [2024-07-25 13:52:32.639350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.815 [2024-07-25 13:52:32.639674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.815 [2024-07-25 13:52:32.639701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.815 [2024-07-25 13:52:32.639716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.815 [2024-07-25 13:52:32.639932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.815 [2024-07-25 13:52:32.640172] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.815 [2024-07-25 13:52:32.640193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.815 [2024-07-25 13:52:32.640206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.815 [2024-07-25 13:52:32.643157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:35.815 [2024-07-25 13:52:32.652545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.815 [2024-07-25 13:52:32.652900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.815 [2024-07-25 13:52:32.652929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.815 [2024-07-25 13:52:32.652945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.815 [2024-07-25 13:52:32.653197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.815 [2024-07-25 13:52:32.653415] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.815 [2024-07-25 13:52:32.653436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.815 [2024-07-25 13:52:32.653448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.815 [2024-07-25 13:52:32.656397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:35.815 [2024-07-25 13:52:32.665807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.816 [2024-07-25 13:52:32.666158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.816 [2024-07-25 13:52:32.666187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.816 [2024-07-25 13:52:32.666208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.816 [2024-07-25 13:52:32.666451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.816 [2024-07-25 13:52:32.666644] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.816 [2024-07-25 13:52:32.666663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.816 [2024-07-25 13:52:32.666677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.816 [2024-07-25 13:52:32.669671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:35.816 [2024-07-25 13:52:32.679087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.816 [2024-07-25 13:52:32.679401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.816 [2024-07-25 13:52:32.679428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.816 [2024-07-25 13:52:32.679443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.816 [2024-07-25 13:52:32.679657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.816 [2024-07-25 13:52:32.679868] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.816 [2024-07-25 13:52:32.679888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.816 [2024-07-25 13:52:32.679901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.816 [2024-07-25 13:52:32.682891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:35.816 [2024-07-25 13:52:32.692311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.816 [2024-07-25 13:52:32.692739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.816 [2024-07-25 13:52:32.692768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.816 [2024-07-25 13:52:32.692785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.816 [2024-07-25 13:52:32.693026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.816 [2024-07-25 13:52:32.693276] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.816 [2024-07-25 13:52:32.693300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.816 [2024-07-25 13:52:32.693314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.816 [2024-07-25 13:52:32.696286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:35.816 [2024-07-25 13:52:32.705536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.816 [2024-07-25 13:52:32.705948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.816 [2024-07-25 13:52:32.705976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.816 [2024-07-25 13:52:32.705991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.816 [2024-07-25 13:52:32.706244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.816 [2024-07-25 13:52:32.706483] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.816 [2024-07-25 13:52:32.706508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.816 [2024-07-25 13:52:32.706521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.816 [2024-07-25 13:52:32.709472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:35.816 [2024-07-25 13:52:32.718733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.816 [2024-07-25 13:52:32.719025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.816 [2024-07-25 13:52:32.719075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.816 [2024-07-25 13:52:32.719092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.816 [2024-07-25 13:52:32.719329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.816 [2024-07-25 13:52:32.719557] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.816 [2024-07-25 13:52:32.719576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.816 [2024-07-25 13:52:32.719589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.816 [2024-07-25 13:52:32.722566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:35.816 [2024-07-25 13:52:32.731986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.816 [2024-07-25 13:52:32.732364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.816 [2024-07-25 13:52:32.732407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.816 [2024-07-25 13:52:32.732423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.816 [2024-07-25 13:52:32.732651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.816 [2024-07-25 13:52:32.732845] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.816 [2024-07-25 13:52:32.732865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.816 [2024-07-25 13:52:32.732878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.816 [2024-07-25 13:52:32.735877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:35.816 [2024-07-25 13:52:32.745317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.816 [2024-07-25 13:52:32.745716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.816 [2024-07-25 13:52:32.745744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.816 [2024-07-25 13:52:32.745759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.816 [2024-07-25 13:52:32.745981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.816 [2024-07-25 13:52:32.746223] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.816 [2024-07-25 13:52:32.746244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.816 [2024-07-25 13:52:32.746257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.816 [2024-07-25 13:52:32.749228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:35.816 [2024-07-25 13:52:32.758637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.816 [2024-07-25 13:52:32.758993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.816 [2024-07-25 13:52:32.759021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.816 [2024-07-25 13:52:32.759037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.816 [2024-07-25 13:52:32.759275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.816 [2024-07-25 13:52:32.759509] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.816 [2024-07-25 13:52:32.759530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.816 [2024-07-25 13:52:32.759542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.816 [2024-07-25 13:52:32.762491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:35.816 [2024-07-25 13:52:32.771863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.816 [2024-07-25 13:52:32.772254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.816 [2024-07-25 13:52:32.772284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.817 [2024-07-25 13:52:32.772300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.817 [2024-07-25 13:52:32.772542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.817 [2024-07-25 13:52:32.772749] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.817 [2024-07-25 13:52:32.772770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.817 [2024-07-25 13:52:32.772783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.817 [2024-07-25 13:52:32.775744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:35.817 [2024-07-25 13:52:32.785180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.817 [2024-07-25 13:52:32.785564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.817 [2024-07-25 13:52:32.785590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.817 [2024-07-25 13:52:32.785605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.817 [2024-07-25 13:52:32.785823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.817 [2024-07-25 13:52:32.786033] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.817 [2024-07-25 13:52:32.786077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.817 [2024-07-25 13:52:32.786091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.817 [2024-07-25 13:52:32.789078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:35.817 [2024-07-25 13:52:32.798494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.817 [2024-07-25 13:52:32.798852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.817 [2024-07-25 13:52:32.798880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.817 [2024-07-25 13:52:32.798896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.817 [2024-07-25 13:52:32.799155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.817 [2024-07-25 13:52:32.799377] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.817 [2024-07-25 13:52:32.799398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.817 [2024-07-25 13:52:32.799426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.817 [2024-07-25 13:52:32.802416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:35.817 [2024-07-25 13:52:32.811733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.817 [2024-07-25 13:52:32.812116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.817 [2024-07-25 13:52:32.812144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.817 [2024-07-25 13:52:32.812161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.817 [2024-07-25 13:52:32.812382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.817 [2024-07-25 13:52:32.812592] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.817 [2024-07-25 13:52:32.812612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.817 [2024-07-25 13:52:32.812625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.817 [2024-07-25 13:52:32.815575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:35.817 [2024-07-25 13:52:32.825031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.817 [2024-07-25 13:52:32.825373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.817 [2024-07-25 13:52:32.825415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.817 [2024-07-25 13:52:32.825430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.817 [2024-07-25 13:52:32.825653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.817 [2024-07-25 13:52:32.825846] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.817 [2024-07-25 13:52:32.825867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.817 [2024-07-25 13:52:32.825880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.817 [2024-07-25 13:52:32.828832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:35.817 [2024-07-25 13:52:32.838343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.817 [2024-07-25 13:52:32.838680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.817 [2024-07-25 13:52:32.838710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:35.817 [2024-07-25 13:52:32.838726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:35.817 [2024-07-25 13:52:32.838948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:35.817 [2024-07-25 13:52:32.839184] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:35.817 [2024-07-25 13:52:32.839206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:35.817 [2024-07-25 13:52:32.839224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.817 [2024-07-25 13:52:32.842230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.079 [2024-07-25 13:52:32.851714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.079 [2024-07-25 13:52:32.852103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.079 [2024-07-25 13:52:32.852133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.079 [2024-07-25 13:52:32.852150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.079 [2024-07-25 13:52:32.852365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.079 [2024-07-25 13:52:32.852612] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.080 [2024-07-25 13:52:32.852635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.080 [2024-07-25 13:52:32.852649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.080 [2024-07-25 13:52:32.855979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:36.080 [2024-07-25 13:52:32.865090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.080 [2024-07-25 13:52:32.865515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.080 [2024-07-25 13:52:32.865543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.080 [2024-07-25 13:52:32.865559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.080 [2024-07-25 13:52:32.865805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.080 [2024-07-25 13:52:32.866012] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.080 [2024-07-25 13:52:32.866032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.080 [2024-07-25 13:52:32.866068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.080 [2024-07-25 13:52:32.869154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.080 [2024-07-25 13:52:32.878451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.080 [2024-07-25 13:52:32.878800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.080 [2024-07-25 13:52:32.878828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.080 [2024-07-25 13:52:32.878844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.080 [2024-07-25 13:52:32.879091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.080 [2024-07-25 13:52:32.879319] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.080 [2024-07-25 13:52:32.879362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.080 [2024-07-25 13:52:32.879377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.080 [2024-07-25 13:52:32.882407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:36.080 [2024-07-25 13:52:32.891710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:36.080 [2024-07-25 13:52:32.892101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:36.080 [2024-07-25 13:52:32.892131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:36.080 [2024-07-25 13:52:32.892147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:36.080 [2024-07-25 13:52:32.892369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:36.080 [2024-07-25 13:52:32.892579] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:36.080 [2024-07-25 13:52:32.892600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:36.080 [2024-07-25 13:52:32.892613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:36.080 [2024-07-25 13:52:32.895605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... the same nine-record reconnect cycle for tqpair=0x1ba1ac0 (resetting controller, connect() failed with errno = 111, reinitialization failed, reset failed) repeats 50 more times between 13:52:32.905 and 13:52:33.560; only the timestamps differ ...]
00:23:36.604 [2024-07-25 13:52:33.569326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:36.604 [2024-07-25 13:52:33.569733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:36.604 [2024-07-25 13:52:33.569761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:36.604 [2024-07-25 13:52:33.569776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:36.604 [2024-07-25 13:52:33.570010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:36.604 [2024-07-25 13:52:33.570243] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:36.604 [2024-07-25 13:52:33.570264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:36.604 [2024-07-25 13:52:33.570277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:36.604 [2024-07-25 13:52:33.573145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:36.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 661006 Killed "${NVMF_APP[@]}" "$@"
00:23:36.604 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:23:36.604 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:23:36.605 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:36.605 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:23:36.605 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:23:36.605 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=662040
00:23:36.605 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:23:36.605 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 662040
00:23:36.605 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 662040 ']'
00:23:36.605 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:36.605 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:36.605 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:36.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:36.605 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:23:36.605 [2024-07-25 13:52:33.582697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:36.605 [2024-07-25 13:52:33.583049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:36.605 [2024-07-25 13:52:33.583101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420
00:23:36.605 [2024-07-25 13:52:33.583118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set
00:23:36.605 [2024-07-25 13:52:33.583333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor
00:23:36.605 [2024-07-25 13:52:33.583551] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:36.605 [2024-07-25 13:52:33.583571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:36.605 [2024-07-25 13:52:33.583584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:36.605 [2024-07-25 13:52:33.586702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
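This is the pivot of the test: bdevperf.sh kills the target it had been talking to (pid 661006) while bdevperf still has I/O in flight, then tgt_init starts a replacement nvmf_tgt (pid 662040) inside the cvl_0_0_ns_spdk network namespace. Every connect() failure on either side of this point is the host probing 10.0.0.2:4420 during the window when no listener exists. Distilled from the xtrace lines above (paths and flags are verbatim from the trace; waitforlisten is the polling helper from autotest_common.sh):

    # Target restart as traced above, reduced to its essentials.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Polls /var/tmp/spdk.sock (max_retries=100) until the app answers RPCs.
    waitforlisten "$nvmfpid"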
00:23:36.605 [2024-07-25 13:52:33.596235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.605 [2024-07-25 13:52:33.596644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.605 [2024-07-25 13:52:33.596672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.605 [2024-07-25 13:52:33.596687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.605 [2024-07-25 13:52:33.596922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.605 [2024-07-25 13:52:33.597163] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.605 [2024-07-25 13:52:33.597186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.605 [2024-07-25 13:52:33.597200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.605 [2024-07-25 13:52:33.600271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.605 [2024-07-25 13:52:33.609539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.605 [2024-07-25 13:52:33.609866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.605 [2024-07-25 13:52:33.609893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.605 [2024-07-25 13:52:33.609909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.605 [2024-07-25 13:52:33.610160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.605 [2024-07-25 13:52:33.610372] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.605 [2024-07-25 13:52:33.610409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.605 [2024-07-25 13:52:33.610423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.605 [2024-07-25 13:52:33.613952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.605 [2024-07-25 13:52:33.622902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.605 [2024-07-25 13:52:33.622916] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:36.605 [2024-07-25 13:52:33.622972] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.605 [2024-07-25 13:52:33.623269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.605 [2024-07-25 13:52:33.623296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.605 [2024-07-25 13:52:33.623312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.605 [2024-07-25 13:52:33.623563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.605 [2024-07-25 13:52:33.623778] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.605 [2024-07-25 13:52:33.623798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.605 [2024-07-25 13:52:33.623811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.605 [2024-07-25 13:52:33.626841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.605 [2024-07-25 13:52:33.636478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.605 [2024-07-25 13:52:33.636830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.605 [2024-07-25 13:52:33.636858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.605 [2024-07-25 13:52:33.636873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.605 [2024-07-25 13:52:33.637133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.605 [2024-07-25 13:52:33.637339] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.605 [2024-07-25 13:52:33.637358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.605 [2024-07-25 13:52:33.637386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.864 [2024-07-25 13:52:33.640551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:36.864 [2024-07-25 13:52:33.649745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.864 [2024-07-25 13:52:33.650102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.864 [2024-07-25 13:52:33.650146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.864 [2024-07-25 13:52:33.650162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.864 [2024-07-25 13:52:33.650404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.864 [2024-07-25 13:52:33.650614] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.864 [2024-07-25 13:52:33.650634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.864 [2024-07-25 13:52:33.650647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.864 [2024-07-25 13:52:33.653624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.864 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.864 [2024-07-25 13:52:33.663165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.864 [2024-07-25 13:52:33.663532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.864 [2024-07-25 13:52:33.663560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.864 [2024-07-25 13:52:33.663576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.864 [2024-07-25 13:52:33.663818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.864 [2024-07-25 13:52:33.664017] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.864 [2024-07-25 13:52:33.664037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.864 [2024-07-25 13:52:33.664075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.864 [2024-07-25 13:52:33.667174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
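The lone EAL line in the block above ("No free 2048 kB hugepages reported on node 1") comes from the freshly started target's DPDK init and is informational here: startup proceeds anyway, which suggests the job's hugepages were all reserved on node 0 of this two-node machine. Per-node availability can be checked directly in sysfs:

    # 2 MiB hugepage pools per NUMA node; a zero for node1 would match
    # the EAL notice above.
    grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages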
00:23:36.864 [2024-07-25 13:52:33.676526] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.864 [2024-07-25 13:52:33.676943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.864 [2024-07-25 13:52:33.676970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.864 [2024-07-25 13:52:33.676986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.864 [2024-07-25 13:52:33.677210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.864 [2024-07-25 13:52:33.677454] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.864 [2024-07-25 13:52:33.677473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.864 [2024-07-25 13:52:33.677486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.864 [2024-07-25 13:52:33.680437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.864 [2024-07-25 13:52:33.689680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.864 [2024-07-25 13:52:33.689818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:36.864 [2024-07-25 13:52:33.690065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.864 [2024-07-25 13:52:33.690093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.864 [2024-07-25 13:52:33.690124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.864 [2024-07-25 13:52:33.690353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.864 [2024-07-25 13:52:33.690583] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.864 [2024-07-25 13:52:33.690602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.864 [2024-07-25 13:52:33.690615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.864 [2024-07-25 13:52:33.693567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:36.864 [2024-07-25 13:52:33.702889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.864 [2024-07-25 13:52:33.703586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.864 [2024-07-25 13:52:33.703638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.864 [2024-07-25 13:52:33.703661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.864 [2024-07-25 13:52:33.703931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.864 [2024-07-25 13:52:33.704161] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.864 [2024-07-25 13:52:33.704182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.864 [2024-07-25 13:52:33.704199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.864 [2024-07-25 13:52:33.707153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.864 [2024-07-25 13:52:33.716275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.864 [2024-07-25 13:52:33.716721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.864 [2024-07-25 13:52:33.716764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.864 [2024-07-25 13:52:33.716782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.864 [2024-07-25 13:52:33.717025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.864 [2024-07-25 13:52:33.717263] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.864 [2024-07-25 13:52:33.717285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.864 [2024-07-25 13:52:33.717298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.864 [2024-07-25 13:52:33.720292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:36.864 [2024-07-25 13:52:33.729534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.864 [2024-07-25 13:52:33.729861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.864 [2024-07-25 13:52:33.729889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.864 [2024-07-25 13:52:33.729905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.864 [2024-07-25 13:52:33.730134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.864 [2024-07-25 13:52:33.730350] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.864 [2024-07-25 13:52:33.730384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.864 [2024-07-25 13:52:33.730398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.864 [2024-07-25 13:52:33.733393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.864 [2024-07-25 13:52:33.742848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.864 [2024-07-25 13:52:33.743277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.864 [2024-07-25 13:52:33.743307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.864 [2024-07-25 13:52:33.743323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.864 [2024-07-25 13:52:33.743569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.864 [2024-07-25 13:52:33.743763] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.864 [2024-07-25 13:52:33.743783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.864 [2024-07-25 13:52:33.743797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.864 [2024-07-25 13:52:33.746794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:36.864 [2024-07-25 13:52:33.756117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.864 [2024-07-25 13:52:33.756629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.864 [2024-07-25 13:52:33.756667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.864 [2024-07-25 13:52:33.756687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.864 [2024-07-25 13:52:33.756927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.864 [2024-07-25 13:52:33.757165] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.864 [2024-07-25 13:52:33.757187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.864 [2024-07-25 13:52:33.757204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.864 [2024-07-25 13:52:33.760169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.864 [2024-07-25 13:52:33.769421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.864 [2024-07-25 13:52:33.769800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.864 [2024-07-25 13:52:33.769828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.864 [2024-07-25 13:52:33.769844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.864 [2024-07-25 13:52:33.770070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.864 [2024-07-25 13:52:33.770293] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.864 [2024-07-25 13:52:33.770315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.864 [2024-07-25 13:52:33.770330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.864 [2024-07-25 13:52:33.773294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:36.864 [2024-07-25 13:52:33.782686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.864 [2024-07-25 13:52:33.783048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.864 [2024-07-25 13:52:33.783083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.864 [2024-07-25 13:52:33.783101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.864 [2024-07-25 13:52:33.783345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.864 [2024-07-25 13:52:33.783556] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.864 [2024-07-25 13:52:33.783576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.864 [2024-07-25 13:52:33.783591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.864 [2024-07-25 13:52:33.786574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.864 [2024-07-25 13:52:33.796005] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.864 [2024-07-25 13:52:33.796027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.864 [2024-07-25 13:52:33.796036] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.864 [2024-07-25 13:52:33.796050] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.864 [2024-07-25 13:52:33.796085] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.864 [2024-07-25 13:52:33.796096] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:36.864 [2024-07-25 13:52:33.796176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:36.864 [2024-07-25 13:52:33.796242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:36.864 [2024-07-25 13:52:33.796244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.864 [2024-07-25 13:52:33.796465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.864 [2024-07-25 13:52:33.796502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.864 [2024-07-25 13:52:33.796519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.864 [2024-07-25 13:52:33.796735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.864 [2024-07-25 13:52:33.796955] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.864 [2024-07-25 13:52:33.796976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.864 [2024-07-25 13:52:33.796990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
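Interleaved with the reconnect loop, the replacement target finishes booting: the tracepoint group mask 0xFFFF requested via -e 0xFFFF is active, and reactors come up on cores 1-3, matching the -m 0xE core mask. The NOTICE lines spell out both ways to retrieve the trace; the sketch below only adds output redirection and assumes spdk_trace is on PATH (in this tree it lives under build/bin):

    # Snapshot live tracepoints for app instance 0 (the -i 0 above).
    spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # Or keep the shared-memory ring for offline analysis, as suggested.
    cp /dev/shm/nvmf_trace.0 .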
00:23:36.864 [2024-07-25 13:52:33.800151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.864 [2024-07-25 13:52:33.809444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.865 [2024-07-25 13:52:33.810029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.865 [2024-07-25 13:52:33.810077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.865 [2024-07-25 13:52:33.810099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.865 [2024-07-25 13:52:33.810342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.865 [2024-07-25 13:52:33.810570] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.865 [2024-07-25 13:52:33.810591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.865 [2024-07-25 13:52:33.810609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.865 [2024-07-25 13:52:33.813663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.865 [2024-07-25 13:52:33.822942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.865 [2024-07-25 13:52:33.823484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.865 [2024-07-25 13:52:33.823525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.865 [2024-07-25 13:52:33.823544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.865 [2024-07-25 13:52:33.823777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.865 [2024-07-25 13:52:33.823989] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.865 [2024-07-25 13:52:33.824010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.865 [2024-07-25 13:52:33.824028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.865 [2024-07-25 13:52:33.827198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:36.865 [2024-07-25 13:52:33.836492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.865 [2024-07-25 13:52:33.837021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.865 [2024-07-25 13:52:33.837069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.865 [2024-07-25 13:52:33.837092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.865 [2024-07-25 13:52:33.837333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.865 [2024-07-25 13:52:33.837579] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.865 [2024-07-25 13:52:33.837601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.865 [2024-07-25 13:52:33.837619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.865 [2024-07-25 13:52:33.840755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.865 [2024-07-25 13:52:33.850053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.865 [2024-07-25 13:52:33.850552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.865 [2024-07-25 13:52:33.850589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.865 [2024-07-25 13:52:33.850608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.865 [2024-07-25 13:52:33.850847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.865 [2024-07-25 13:52:33.851083] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.865 [2024-07-25 13:52:33.851106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.865 [2024-07-25 13:52:33.851123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.865 [2024-07-25 13:52:33.854260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:36.865 [2024-07-25 13:52:33.863545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.865 [2024-07-25 13:52:33.864113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.865 [2024-07-25 13:52:33.864159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.865 [2024-07-25 13:52:33.864181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.865 [2024-07-25 13:52:33.864409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.865 [2024-07-25 13:52:33.864634] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.865 [2024-07-25 13:52:33.864658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.865 [2024-07-25 13:52:33.864677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.865 [2024-07-25 13:52:33.867888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:36.865 [2024-07-25 13:52:33.877179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.865 [2024-07-25 13:52:33.877621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.865 [2024-07-25 13:52:33.877657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.865 [2024-07-25 13:52:33.877677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.865 [2024-07-25 13:52:33.877927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.865 [2024-07-25 13:52:33.878168] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.865 [2024-07-25 13:52:33.878192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.865 [2024-07-25 13:52:33.878210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.865 [2024-07-25 13:52:33.881394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:36.865 [2024-07-25 13:52:33.890595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.865 [2024-07-25 13:52:33.890986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.865 [2024-07-25 13:52:33.891015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:36.865 [2024-07-25 13:52:33.891032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:36.865 [2024-07-25 13:52:33.891264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:36.865 [2024-07-25 13:52:33.891505] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:36.865 [2024-07-25 13:52:33.891526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:36.865 [2024-07-25 13:52:33.891539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.865 [2024-07-25 13:52:33.894739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:37.123 [2024-07-25 13:52:33.904289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:37.123 [2024-07-25 13:52:33.904631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:37.123 [2024-07-25 13:52:33.904660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:37.123 [2024-07-25 13:52:33.904677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:37.123 [2024-07-25 13:52:33.904891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:37.123 [2024-07-25 13:52:33.905151] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:37.124 [2024-07-25 13:52:33.905174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:37.124 [2024-07-25 13:52:33.905188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:37.124 [2024-07-25 13:52:33.908439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:37.124 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:37.124 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:23:37.124 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:37.124 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:37.124 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:37.124 [2024-07-25 13:52:33.917781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:37.124 [2024-07-25 13:52:33.918160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:37.124 [2024-07-25 13:52:33.918189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:37.124 [2024-07-25 13:52:33.918206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:37.124 [2024-07-25 13:52:33.918437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:37.124 [2024-07-25 13:52:33.918659] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:37.124 [2024-07-25 13:52:33.918680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:37.124 [2024-07-25 13:52:33.918694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:37.124 [2024-07-25 13:52:33.921881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:37.124 [2024-07-25 13:52:33.931308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:37.124 [2024-07-25 13:52:33.931717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:37.124 [2024-07-25 13:52:33.931747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:37.124 [2024-07-25 13:52:33.931763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:37.124 [2024-07-25 13:52:33.932006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:37.124 [2024-07-25 13:52:33.932246] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:37.124 [2024-07-25 13:52:33.932269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:37.124 [2024-07-25 13:52:33.932284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:37.124 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.124 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:37.124 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.124 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:37.124 [2024-07-25 13:52:33.935498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:37.124 [2024-07-25 13:52:33.937242] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.124 [2024-07-25 13:52:33.944933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:37.124 [2024-07-25 13:52:33.945306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:37.124 [2024-07-25 13:52:33.945335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:37.124 [2024-07-25 13:52:33.945351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:37.124 [2024-07-25 13:52:33.945579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:37.124 [2024-07-25 13:52:33.945794] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:37.124 [2024-07-25 13:52:33.945815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:37.124 [2024-07-25 13:52:33.945828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:37.124 [2024-07-25 13:52:33.949005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:37.124 [2024-07-25 13:52:33.958329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:37.124 [2024-07-25 13:52:33.958769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:37.124 [2024-07-25 13:52:33.958798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:37.124 [2024-07-25 13:52:33.958815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:37.124 [2024-07-25 13:52:33.959057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:37.124 [2024-07-25 13:52:33.959310] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:37.124 [2024-07-25 13:52:33.959332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:37.124 [2024-07-25 13:52:33.959357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:37.124 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.124 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:37.124 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.124 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:37.124 [2024-07-25 13:52:33.962612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:37.124 [2024-07-25 13:52:33.971845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:37.124 [2024-07-25 13:52:33.972252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:37.124 [2024-07-25 13:52:33.972287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:37.124 [2024-07-25 13:52:33.972307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:37.124 [2024-07-25 13:52:33.972551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:37.124 [2024-07-25 13:52:33.972760] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:37.124 [2024-07-25 13:52:33.972782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:37.124 [2024-07-25 13:52:33.972800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:37.124 [2024-07-25 13:52:33.975960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:37.124 [2024-07-25 13:52:33.985297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:37.124 [2024-07-25 13:52:33.985818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:37.124 [2024-07-25 13:52:33.985859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:37.124 [2024-07-25 13:52:33.985879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:37.124 [2024-07-25 13:52:33.986143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:37.124 [2024-07-25 13:52:33.986384] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:37.124 [2024-07-25 13:52:33.986407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:37.124 [2024-07-25 13:52:33.986441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:37.124 Malloc0 00:23:37.124 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.124 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:37.124 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.124 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:37.124 [2024-07-25 13:52:33.989753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:37.124 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.125 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:37.125 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.125 13:52:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:37.125 [2024-07-25 13:52:33.998889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:37.125 [2024-07-25 13:52:33.999248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:37.125 [2024-07-25 13:52:33.999286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba1ac0 with addr=10.0.0.2, port=4420 00:23:37.125 [2024-07-25 13:52:33.999303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba1ac0 is same with the state(5) to be set 00:23:37.125 [2024-07-25 13:52:33.999532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba1ac0 (9): Bad file descriptor 00:23:37.125 [2024-07-25 13:52:33.999755] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:37.125 [2024-07-25 13:52:33.999777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:37.125 [2024-07-25 13:52:33.999790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:37.125 13:52:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.125 13:52:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:37.125 13:52:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.125 13:52:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:37.125 [2024-07-25 13:52:34.002993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:37.125 [2024-07-25 13:52:34.006386] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.125 13:52:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.125 13:52:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 661258 00:23:37.125 [2024-07-25 13:52:34.012489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:37.125 [2024-07-25 13:52:34.136223] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
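Spliced out of the rpc_cmd xtrace lines above, the whole tgt_init bring-up is five RPCs, and the moment the listener at the end is up, the retry loop finally wins: the entry at 13:52:34.136223 flips to "Resetting controller successful" and bdevperf resumes I/O.

    # tgt_init consolidated from the trace; rpc_cmd wraps scripts/rpc.py.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

As a sanity check on the summary table that follows: 6686.66 IOPS at the job's 4096-byte I/O size is 6686.66 x 4096 / 2^20 = 26.12 MiB/s, exactly the MiB/s column, while the large Fail/s figure is the I/O rejected during the windows when the controller sat in failed state.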
00:23:47.175 
00:23:47.175                                                           Latency(us)
00:23:47.175 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:47.175 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:47.175 Verification LBA range: start 0x0 length 0x4000
00:23:47.175 	 Nvme1n1           :      15.01    6686.66      26.12   10334.50       0.00    7497.30     555.24   17476.27
00:23:47.175 ===================================================================================================================
00:23:47.175 	 Total             :            6686.66      26.12   10334.50       0.00    7497.30     555.24   17476.27
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:47.175 rmmod nvme_tcp
00:23:47.175 rmmod nvme_fabrics
00:23:47.175 rmmod nvme_keyring
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 662040 ']'
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 662040
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 662040 ']'
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 662040
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 662040
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 662040'
00:23:47.175 killing process with pid 662040
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 662040
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 662040
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:47.175 13:52:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:49.082 13:52:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:23:49.082 
00:23:49.082 real	0m22.754s
00:23:49.082 user	1m0.341s
00:23:49.082 sys	0m4.559s
00:23:49.082 13:52:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:49.082 13:52:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:23:49.082 ************************************
00:23:49.082 END TEST nvmf_bdevperf
00:23:49.082 ************************************
00:23:49.082 13:52:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:23:49.082 13:52:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:23:49.082 13:52:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:23:49.082 13:52:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:23:49.082 ************************************
00:23:49.082 START TEST nvmf_target_disconnect
00:23:49.082 ************************************
00:23:49.082 13:52:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:23:49.082 * Looking for test storage...
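(For readers tracing along: in plain shell terms, the nvmftestfini teardown just logged above boils down to roughly the following. This is a minimal sketch, not the harness itself; the nqn, the pid 662040, and the interface cvl_0_1 are the values from this run, and scripts/rpc.py assumes an SPDK checkout as the working directory.)

    #!/usr/bin/env bash
    # Rough equivalent of the nvmftestfini teardown traced above.
    set -e
    # Drop the subsystem bdevperf was exercising.
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # Unload the kernel initiator stack; the verbose removal is what
    # produces the "rmmod nvme_tcp/nvme_fabrics/nvme_keyring" lines.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Stop the nvmf_tgt process. 'wait' succeeds in the harness because
    # the target is a child of the test shell; guard it here.
    kill 662040
    wait 662040 2>/dev/null || true
    # Remove the test address from the initiator-side interface.
    ip -4 addr flush cvl_0_1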
00:23:49.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:49.082 13:52:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:49.082 13:52:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.082 
13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:49.082 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:49.083 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:49.083 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:49.083 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:49.083 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:23:49.083 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:23:49.083 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:23:49.083 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:49.083 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:49.083 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:49.083 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:49.083 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:49.083 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.083 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.083 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.083 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:49.083 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:49.083 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:23:49.083 13:52:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:50.986 
13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:50.986 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:50.986 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:50.986 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:50.987 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:50.987 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:50.987 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.246 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.246 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.246 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:51.246 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.246 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:51.246 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:51.246 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:51.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:51.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:23:51.246 00:23:51.246 --- 10.0.0.2 ping statistics --- 00:23:51.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.246 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:23:51.246 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:51.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:23:51.246 00:23:51.246 --- 10.0.0.1 ping statistics --- 00:23:51.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.246 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:23:51.246 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.246 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:23:51.246 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:51.246 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.246 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:51.246 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:51.246 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.246 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:51.246 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:51.246 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:23:51.247 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:51.247 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:51.247 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:51.247 ************************************ 00:23:51.247 START TEST nvmf_target_disconnect_tc1 00:23:51.247 ************************************ 00:23:51.247 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:23:51.247 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:51.247 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:23:51.247 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:51.247 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:23:51.247 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:51.247 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:23:51.247 13:52:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:51.247 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:23:51.247 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:51.247 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:23:51.247 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:23:51.247 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:51.247 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.247 [2024-07-25 13:52:48.256083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:51.247 [2024-07-25 13:52:48.256153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165e1a0 with addr=10.0.0.2, port=4420 00:23:51.247 [2024-07-25 13:52:48.256191] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:51.247 [2024-07-25 13:52:48.256213] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:51.247 [2024-07-25 13:52:48.256227] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:23:51.247 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:23:51.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:23:51.247 Initializing NVMe Controllers 00:23:51.247 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:23:51.247 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:51.247 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:51.247 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:51.247 00:23:51.247 real 0m0.086s 00:23:51.247 user 0m0.041s 00:23:51.247 sys 0m0.045s 00:23:51.247 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:51.247 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:51.247 ************************************ 00:23:51.247 END TEST nvmf_target_disconnect_tc1 00:23:51.247 ************************************ 00:23:51.505 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:23:51.505 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:51.505 13:52:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:51.505 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:23:51.505 ************************************ 00:23:51.505 START TEST nvmf_target_disconnect_tc2 00:23:51.505 ************************************ 00:23:51.505 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:23:51.505 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:23:51.505 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:23:51.505 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:51.506 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:51.506 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:51.506 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=665197 00:23:51.506 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:23:51.506 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 665197 00:23:51.506 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 665197 ']' 00:23:51.506 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.506 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:51.506 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.506 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:51.506 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:51.506 [2024-07-25 13:52:48.366639] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:51.506 [2024-07-25 13:52:48.366724] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.506 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.506 [2024-07-25 13:52:48.429292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:51.506 [2024-07-25 13:52:48.530598] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:51.506 [2024-07-25 13:52:48.530655] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.506 [2024-07-25 13:52:48.530678] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.506 [2024-07-25 13:52:48.530689] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.506 [2024-07-25 13:52:48.530699] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:51.506 [2024-07-25 13:52:48.530784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:23:51.506 [2024-07-25 13:52:48.530888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:23:51.506 [2024-07-25 13:52:48.530976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:23:51.506 [2024-07-25 13:52:48.530979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:51.764 Malloc0 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:51.764 [2024-07-25 13:52:48.723017] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
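(For orientation: the target bring-up being traced via rpc_cmd here, which finishes just below with the add_ns and add_listener calls, is essentially the following RPC sequence. A condensed sketch with the arguments copied from this trace; it assumes nvmf_tgt is already running, here inside the cvl_0_0_ns_spdk namespace, and that scripts/rpc.py reaches its RPC socket.)

    #!/usr/bin/env bash
    # Minimal sketch of the tc2 target bring-up traced above/below.
    set -e
    # 64 MiB malloc bdev with 512-byte blocks to serve as the namespace.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # TCP transport, with the same options the harness passes (-t tcp -o).
    scripts/rpc.py nvmf_create_transport -t tcp -o
    # Subsystem (allow any host, fixed serial), namespace, and a TCP
    # listener on 10.0.0.2:4420 plus the discovery listener.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420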
00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:51.764 [2024-07-25 13:52:48.751333] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=665220 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:23:51.764 13:52:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:52.024 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.949 13:52:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 665197 00:23:53.949 13:52:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:23:53.949 Read completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Read completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Read completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Read completed with error (sct=0, sc=8) 00:23:53.949 starting I/O 
failed 00:23:53.949 Read completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Read completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Write completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Read completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Write completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Write completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Read completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Read completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Write completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Read completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Write completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Write completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Write completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Read completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Write completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Read completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Write completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Read completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Write completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Write completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Write completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Read completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Write completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Write completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Read completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Write completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Read completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Write completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Read completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 [2024-07-25 13:52:50.777630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:53.949 Read completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Read completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Read completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Read completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Read completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.949 Read completed with error (sct=0, sc=8) 00:23:53.949 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 
00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 [2024-07-25 13:52:50.777985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read 
completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Read completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 Write completed with error (sct=0, sc=8) 00:23:53.950 starting I/O failed 00:23:53.950 [2024-07-25 13:52:50.778321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:53.950 [2024-07-25 13:52:50.778526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.950 [2024-07-25 13:52:50.778558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.950 qpair failed and we were unable to recover it. 00:23:53.950 [2024-07-25 13:52:50.778704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.950 [2024-07-25 13:52:50.778731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.950 qpair failed and we were unable to recover it. 00:23:53.950 [2024-07-25 13:52:50.778865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.950 [2024-07-25 13:52:50.778893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.950 qpair failed and we were unable to recover it. 00:23:53.950 [2024-07-25 13:52:50.779048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.950 [2024-07-25 13:52:50.779083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.950 qpair failed and we were unable to recover it. 00:23:53.950 [2024-07-25 13:52:50.779175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.950 [2024-07-25 13:52:50.779207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.950 qpair failed and we were unable to recover it. 00:23:53.950 [2024-07-25 13:52:50.779298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.950 [2024-07-25 13:52:50.779324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.950 qpair failed and we were unable to recover it. 
00:23:53.950 [2024-07-25 13:52:50.779434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.950 [2024-07-25 13:52:50.779461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.950 qpair failed and we were unable to recover it. 00:23:53.950 [2024-07-25 13:52:50.779550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.950 [2024-07-25 13:52:50.779577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.950 qpair failed and we were unable to recover it. 00:23:53.950 [2024-07-25 13:52:50.779677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.950 [2024-07-25 13:52:50.779703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.950 qpair failed and we were unable to recover it. 00:23:53.950 [2024-07-25 13:52:50.779815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.950 [2024-07-25 13:52:50.779841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.950 qpair failed and we were unable to recover it. 00:23:53.950 [2024-07-25 13:52:50.779934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.950 [2024-07-25 13:52:50.779961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.950 qpair failed and we were unable to recover it. 00:23:53.950 [2024-07-25 13:52:50.780073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.950 [2024-07-25 13:52:50.780101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.950 qpair failed and we were unable to recover it. 00:23:53.950 [2024-07-25 13:52:50.780206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.950 [2024-07-25 13:52:50.780233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.950 qpair failed and we were unable to recover it. 00:23:53.950 [2024-07-25 13:52:50.780326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.950 [2024-07-25 13:52:50.780353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.950 qpair failed and we were unable to recover it. 00:23:53.950 [2024-07-25 13:52:50.780451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.950 [2024-07-25 13:52:50.780493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.950 qpair failed and we were unable to recover it. 00:23:53.951 [2024-07-25 13:52:50.780610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.951 [2024-07-25 13:52:50.780637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.951 qpair failed and we were unable to recover it. 
00:23:53.951 [2024-07-25 13:52:50.780758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.951 [2024-07-25 13:52:50.780784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.951 qpair failed and we were unable to recover it. 00:23:53.951 [2024-07-25 13:52:50.780920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.951 [2024-07-25 13:52:50.780959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.951 qpair failed and we were unable to recover it. 00:23:53.951 [2024-07-25 13:52:50.781091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.951 [2024-07-25 13:52:50.781119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.951 qpair failed and we were unable to recover it. 00:23:53.951 [2024-07-25 13:52:50.781210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.951 [2024-07-25 13:52:50.781236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.951 qpair failed and we were unable to recover it. 00:23:53.951 [2024-07-25 13:52:50.781328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.951 [2024-07-25 13:52:50.781355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.951 qpair failed and we were unable to recover it. 00:23:53.951 [2024-07-25 13:52:50.781465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.951 [2024-07-25 13:52:50.781491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.951 qpair failed and we were unable to recover it. 00:23:53.951 [2024-07-25 13:52:50.781565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.951 [2024-07-25 13:52:50.781591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.951 qpair failed and we were unable to recover it. 00:23:53.951 [2024-07-25 13:52:50.781712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.951 [2024-07-25 13:52:50.781738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.951 qpair failed and we were unable to recover it. 00:23:53.951 [2024-07-25 13:52:50.781821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.951 [2024-07-25 13:52:50.781847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.951 qpair failed and we were unable to recover it. 00:23:53.951 [2024-07-25 13:52:50.781980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.951 [2024-07-25 13:52:50.782020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.951 qpair failed and we were unable to recover it. 
00:23:53.951 [2024-07-25 13:52:50.782147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.951 [2024-07-25 13:52:50.782175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.951 qpair failed and we were unable to recover it.
00:23:53.951 [2024-07-25 13:52:50.782269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.951 [2024-07-25 13:52:50.782295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.951 qpair failed and we were unable to recover it.
00:23:53.951 [2024-07-25 13:52:50.782413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.951 [2024-07-25 13:52:50.782440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.951 qpair failed and we were unable to recover it.
00:23:53.951 [2024-07-25 13:52:50.782645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.951 [2024-07-25 13:52:50.782672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.951 qpair failed and we were unable to recover it.
00:23:53.951 [2024-07-25 13:52:50.782780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.951 [2024-07-25 13:52:50.782806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.951 qpair failed and we were unable to recover it.
00:23:53.951 [2024-07-25 13:52:50.782919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.951 [2024-07-25 13:52:50.782951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.951 qpair failed and we were unable to recover it.
00:23:53.951 [2024-07-25 13:52:50.783049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.951 [2024-07-25 13:52:50.783082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.951 qpair failed and we were unable to recover it.
00:23:53.951 [2024-07-25 13:52:50.783206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.951 [2024-07-25 13:52:50.783232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.951 qpair failed and we were unable to recover it.
00:23:53.951 [2024-07-25 13:52:50.783326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.951 [2024-07-25 13:52:50.783352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.951 qpair failed and we were unable to recover it.
00:23:53.951 [2024-07-25 13:52:50.783462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.951 [2024-07-25 13:52:50.783488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.951 qpair failed and we were unable to recover it.
00:23:53.951 [2024-07-25 13:52:50.783599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.951 [2024-07-25 13:52:50.783625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.951 qpair failed and we were unable to recover it.
00:23:53.951 [2024-07-25 13:52:50.783739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.951 [2024-07-25 13:52:50.783767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.951 qpair failed and we were unable to recover it.
00:23:53.951 [2024-07-25 13:52:50.783899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.951 [2024-07-25 13:52:50.783940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.951 qpair failed and we were unable to recover it.
00:23:53.951 [2024-07-25 13:52:50.784092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.951 [2024-07-25 13:52:50.784122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.951 qpair failed and we were unable to recover it.
00:23:53.951 [2024-07-25 13:52:50.784316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.951 [2024-07-25 13:52:50.784343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.951 qpair failed and we were unable to recover it.
00:23:53.951 [2024-07-25 13:52:50.784461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.951 [2024-07-25 13:52:50.784488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.951 qpair failed and we were unable to recover it.
00:23:53.951 [2024-07-25 13:52:50.784576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.951 [2024-07-25 13:52:50.784602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.951 qpair failed and we were unable to recover it.
00:23:53.951 [2024-07-25 13:52:50.784707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.951 [2024-07-25 13:52:50.784735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.951 qpair failed and we were unable to recover it.
00:23:53.951 [2024-07-25 13:52:50.784881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.951 [2024-07-25 13:52:50.784907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.951 qpair failed and we were unable to recover it.
00:23:53.951 Read completed with error (sct=0, sc=8)
00:23:53.951 starting I/O failed
00:23:53.951 Read completed with error (sct=0, sc=8)
00:23:53.951 starting I/O failed
00:23:53.951 Read completed with error (sct=0, sc=8)
00:23:53.951 starting I/O failed
00:23:53.951 Read completed with error (sct=0, sc=8)
00:23:53.951 starting I/O failed
00:23:53.951 Read completed with error (sct=0, sc=8)
00:23:53.951 starting I/O failed
00:23:53.951 Read completed with error (sct=0, sc=8)
00:23:53.951 starting I/O failed
00:23:53.951 Read completed with error (sct=0, sc=8)
00:23:53.951 starting I/O failed
00:23:53.951 Read completed with error (sct=0, sc=8)
00:23:53.951 starting I/O failed
00:23:53.951 Read completed with error (sct=0, sc=8)
00:23:53.951 starting I/O failed
00:23:53.951 Read completed with error (sct=0, sc=8)
00:23:53.951 starting I/O failed
00:23:53.951 Read completed with error (sct=0, sc=8)
00:23:53.951 starting I/O failed
00:23:53.951 Write completed with error (sct=0, sc=8)
00:23:53.951 starting I/O failed
00:23:53.951 Write completed with error (sct=0, sc=8)
00:23:53.952 Read completed with error (sct=0, sc=8)
00:23:53.952 starting I/O failed
00:23:53.952 Read completed with error (sct=0, sc=8)
00:23:53.952 starting I/O failed
00:23:53.952 Read completed with error (sct=0, sc=8)
00:23:53.952 starting I/O failed
00:23:53.952 Read completed with error (sct=0, sc=8)
00:23:53.952 starting I/O failed
00:23:53.952 Write completed with error (sct=0, sc=8)
00:23:53.952 starting I/O failed
00:23:53.952 Read completed with error (sct=0, sc=8)
00:23:53.952 starting I/O failed
00:23:53.952 Read completed with error (sct=0, sc=8)
00:23:53.952 starting I/O failed
00:23:53.952 Read completed with error (sct=0, sc=8)
00:23:53.952 starting I/O failed
00:23:53.952 Write completed with error (sct=0, sc=8)
00:23:53.952 starting I/O failed
00:23:53.952 Read completed with error (sct=0, sc=8)
00:23:53.952 starting I/O failed
00:23:53.952 Read completed with error (sct=0, sc=8)
00:23:53.952 starting I/O failed
00:23:53.952 Write completed with error (sct=0, sc=8)
00:23:53.952 starting I/O failed
00:23:53.952 Write completed with error (sct=0, sc=8)
00:23:53.952 starting I/O failed
00:23:53.952 Write completed with error (sct=0, sc=8)
00:23:53.952 starting I/O failed
00:23:53.952 Read completed with error (sct=0, sc=8)
00:23:53.952 starting I/O failed
00:23:53.952 Write completed with error (sct=0, sc=8)
00:23:53.952 starting I/O failed
00:23:53.952 Write completed with error (sct=0, sc=8)
00:23:53.952 starting I/O failed
00:23:53.952 Write completed with error (sct=0, sc=8)
00:23:53.952 starting I/O failed
00:23:53.952 Write completed with error (sct=0, sc=8)
00:23:53.952 starting I/O failed
00:23:53.952 [2024-07-25 13:52:50.785224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:53.952 [2024-07-25 13:52:50.785315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.785357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.785483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.785509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.785626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.785653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.785772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.785799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.785906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.785933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.786094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.786134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.786245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.786282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.786384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.786424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.786548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.786576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.786731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.786758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.786872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.786899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.787012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.787039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.787138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.787167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.787261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.787290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.787405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.787432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.787521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.787547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.787667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.787693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.787842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.787869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.787986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.788012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.788099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.788126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.788210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.788237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.788319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.788346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.788460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.788486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.788572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.788599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.788710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.788736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.788823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.788850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.788997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.789028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.952 qpair failed and we were unable to recover it.
00:23:53.952 [2024-07-25 13:52:50.789153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.952 [2024-07-25 13:52:50.789181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.789311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.789350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.789469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.789497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.789615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.789642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.789763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.789789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.789902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.789928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.790065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.790105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.790249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.790289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.790442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.790470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.790589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.790615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.790734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.790761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.790878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.790904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.791003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.791030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.791136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.791163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.791249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.791275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.791392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.791417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.791505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.791532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.791629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.791655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.791780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.791807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.791952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.791986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.792106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.792133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.792224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.792250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.792361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.792387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.792465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.792492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.792607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.792633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.792716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.792742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.792861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.792892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.792986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.793013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.793183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.793223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.793372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.793401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.793517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.793545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.793623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.793650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.793775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.793802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.793894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.793921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.794014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.953 [2024-07-25 13:52:50.794042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.953 qpair failed and we were unable to recover it.
00:23:53.953 [2024-07-25 13:52:50.794169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.794195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.794319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.794345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.794436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.794462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.794601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.794627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.794745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.794786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.794912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.794952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.795095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.795124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.795218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.795243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.795366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.795392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.795534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.795560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.795674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.795700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.795798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.795832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.795950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.795976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.796088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.796114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.796197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.796223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.796315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.796341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.796453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.796479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.796571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.796597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.796694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.796732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.796881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.796908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.796989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.797015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.797121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.797147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.797267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.797295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.797415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.797441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.797565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.797591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.797707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.797732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.797859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.954 [2024-07-25 13:52:50.797897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.954 qpair failed and we were unable to recover it.
00:23:53.954 [2024-07-25 13:52:50.797988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.798015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.798137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.798164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.798259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.798286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.798380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.798406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.798547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.798574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.798666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.798703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.798847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.798873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.799018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.799046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.799169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.799195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.799340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.799366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.799506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.799531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.799650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.799676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.799768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.799794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.799910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.799936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.800084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.800111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.800229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.800255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.800372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.800398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.800570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.800598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.800710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.800736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.800874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.800900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.801016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.801044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.801135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.801161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.801253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.801279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.801364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.801390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.801474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.801504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.801616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.801641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.801750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.801776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.801900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.801939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.802071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.802099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.802211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.802238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.802333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.802359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.802438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.802464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.802576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.802601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.802713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.802739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.955 [2024-07-25 13:52:50.802870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.955 [2024-07-25 13:52:50.802909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.955 qpair failed and we were unable to recover it.
00:23:53.956 [2024-07-25 13:52:50.803037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.956 [2024-07-25 13:52:50.803070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.956 qpair failed and we were unable to recover it.
00:23:53.956 [2024-07-25 13:52:50.803187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.956 [2024-07-25 13:52:50.803213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.956 qpair failed and we were unable to recover it.
00:23:53.956 [2024-07-25 13:52:50.803325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.956 [2024-07-25 13:52:50.803352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.956 qpair failed and we were unable to recover it.
00:23:53.956 [2024-07-25 13:52:50.803475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.956 [2024-07-25 13:52:50.803501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.956 qpair failed and we were unable to recover it.
00:23:53.956 [2024-07-25 13:52:50.803617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.956 [2024-07-25 13:52:50.803643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.956 qpair failed and we were unable to recover it.
00:23:53.956 [2024-07-25 13:52:50.803752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.956 [2024-07-25 13:52:50.803778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.956 qpair failed and we were unable to recover it.
00:23:53.956 [2024-07-25 13:52:50.803876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.956 [2024-07-25 13:52:50.803914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.956 qpair failed and we were unable to recover it.
00:23:53.956 [2024-07-25 13:52:50.804007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.956 [2024-07-25 13:52:50.804034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.956 qpair failed and we were unable to recover it.
00:23:53.956 [2024-07-25 13:52:50.804135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.956 [2024-07-25 13:52:50.804160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.956 qpair failed and we were unable to recover it.
00:23:53.956 [2024-07-25 13:52:50.804280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.956 [2024-07-25 13:52:50.804305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.956 qpair failed and we were unable to recover it.
00:23:53.956 [2024-07-25 13:52:50.804394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.956 [2024-07-25 13:52:50.804419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.956 qpair failed and we were unable to recover it.
00:23:53.956 [2024-07-25 13:52:50.804507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.956 [2024-07-25 13:52:50.804532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.956 qpair failed and we were unable to recover it.
00:23:53.956 [2024-07-25 13:52:50.804647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.956 [2024-07-25 13:52:50.804673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.956 qpair failed and we were unable to recover it.
00:23:53.956 [2024-07-25 13:52:50.804761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.956 [2024-07-25 13:52:50.804786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.956 qpair failed and we were unable to recover it.
00:23:53.956 [2024-07-25 13:52:50.804912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.956 [2024-07-25 13:52:50.804950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.956 qpair failed and we were unable to recover it.
00:23:53.956 [2024-07-25 13:52:50.805074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.956 [2024-07-25 13:52:50.805101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.956 qpair failed and we were unable to recover it.
00:23:53.956 [2024-07-25 13:52:50.805203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.956 [2024-07-25 13:52:50.805240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.956 qpair failed and we were unable to recover it.
00:23:53.956 [2024-07-25 13:52:50.805360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.956 [2024-07-25 13:52:50.805386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.956 qpair failed and we were unable to recover it. 00:23:53.956 [2024-07-25 13:52:50.805503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.956 [2024-07-25 13:52:50.805529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.956 qpair failed and we were unable to recover it. 00:23:53.956 [2024-07-25 13:52:50.805618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.956 [2024-07-25 13:52:50.805643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.956 qpair failed and we were unable to recover it. 00:23:53.956 [2024-07-25 13:52:50.805785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.956 [2024-07-25 13:52:50.805812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.956 qpair failed and we were unable to recover it. 00:23:53.956 [2024-07-25 13:52:50.805934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.956 [2024-07-25 13:52:50.805961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.956 qpair failed and we were unable to recover it. 00:23:53.956 [2024-07-25 13:52:50.806057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.956 [2024-07-25 13:52:50.806089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.956 qpair failed and we were unable to recover it. 00:23:53.956 [2024-07-25 13:52:50.806187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.956 [2024-07-25 13:52:50.806212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.956 qpair failed and we were unable to recover it. 00:23:53.956 [2024-07-25 13:52:50.806324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.956 [2024-07-25 13:52:50.806350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.956 qpair failed and we were unable to recover it. 00:23:53.956 [2024-07-25 13:52:50.806490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.956 [2024-07-25 13:52:50.806515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.956 qpair failed and we were unable to recover it. 00:23:53.956 [2024-07-25 13:52:50.806599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.956 [2024-07-25 13:52:50.806624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.956 qpair failed and we were unable to recover it. 
00:23:53.956 [2024-07-25 13:52:50.806735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.956 [2024-07-25 13:52:50.806760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.956 qpair failed and we were unable to recover it. 00:23:53.956 [2024-07-25 13:52:50.806873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.956 [2024-07-25 13:52:50.806901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.956 qpair failed and we were unable to recover it. 00:23:53.956 [2024-07-25 13:52:50.807023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.956 [2024-07-25 13:52:50.807055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.956 qpair failed and we were unable to recover it. 00:23:53.956 [2024-07-25 13:52:50.807190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.956 [2024-07-25 13:52:50.807215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.956 qpair failed and we were unable to recover it. 00:23:53.956 [2024-07-25 13:52:50.807313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.956 [2024-07-25 13:52:50.807338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.956 qpair failed and we were unable to recover it. 00:23:53.956 [2024-07-25 13:52:50.807446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.956 [2024-07-25 13:52:50.807471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.956 qpair failed and we were unable to recover it. 00:23:53.956 [2024-07-25 13:52:50.807553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.956 [2024-07-25 13:52:50.807578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.956 qpair failed and we were unable to recover it. 00:23:53.956 [2024-07-25 13:52:50.807694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.807720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.807847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.807886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.808035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.808075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 
00:23:53.957 [2024-07-25 13:52:50.808197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.808222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.808301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.808326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.808413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.808439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.808527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.808554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.808662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.808686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.808771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.808796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.808948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.808974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.809101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.809128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.809250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.809275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.809385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.809409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 
00:23:53.957 [2024-07-25 13:52:50.809497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.809522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.809673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.809726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.809840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.809867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.810011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.810037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.810164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.810189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.810300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.810325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.810440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.810466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.810609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.810634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.810754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.810780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.810893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.810924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 
00:23:53.957 [2024-07-25 13:52:50.811047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.811092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.811217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.811244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.811330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.811356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.811551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.811577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.811670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.811696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.811789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.811815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.811943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.811970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.812087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.812115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.812204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.812242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.812391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.812418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 
00:23:53.957 [2024-07-25 13:52:50.812593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.812620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.957 [2024-07-25 13:52:50.812729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.957 [2024-07-25 13:52:50.812755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.957 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.812841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.812868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.812992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.813019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.813127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.813165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.813255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.813282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.813404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.813430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.813522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.813547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.813642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.813668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.813811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.813837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 
00:23:53.958 [2024-07-25 13:52:50.813956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.813982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.814082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.814110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.814256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.814282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.814427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.814453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.814569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.814594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.814685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.814711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.814807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.814845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.814990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.815018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.815136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.815162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.815253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.815279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 
00:23:53.958 [2024-07-25 13:52:50.815400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.815425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.815504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.815529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.815651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.815676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.815763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.815788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.815878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.815903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.816015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.816040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.816158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.816184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.816271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.816296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.816373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.816398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.816516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.816542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 
00:23:53.958 [2024-07-25 13:52:50.816662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.816688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.816799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.816824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.816939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.816964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.817074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.817113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.817237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.817264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.817362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.817391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.958 [2024-07-25 13:52:50.817538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.958 [2024-07-25 13:52:50.817564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.958 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.817677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.817702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.817815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.817840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.817932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.817958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 
00:23:53.959 [2024-07-25 13:52:50.818044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.818076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.818184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.818210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.818291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.818317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.818417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.818455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.818590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.818616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.818732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.818759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.818873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.818899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.819016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.819042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.819193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.819218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.819307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.819333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 
00:23:53.959 [2024-07-25 13:52:50.819471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.819497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.819609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.819635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.819724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.819750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.819867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.819895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.819992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.820018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.820147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.820174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.820293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.820328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.820442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.820468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.820554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.820581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.820731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.820757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 
00:23:53.959 [2024-07-25 13:52:50.820843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.820868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.820983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.821010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.821158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.821184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.821272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.821297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.821409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.821435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.821546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.821601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.959 qpair failed and we were unable to recover it. 00:23:53.959 [2024-07-25 13:52:50.821712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.959 [2024-07-25 13:52:50.821737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.821818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.821845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.821934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.821959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.822047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.822079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 
00:23:53.960 [2024-07-25 13:52:50.822172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.822197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.822274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.822299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.822408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.822433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.822580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.822605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.822687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.822712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.822830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.822855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.823000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.823028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.823164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.823192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.823279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.823305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.823413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.823439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 
00:23:53.960 [2024-07-25 13:52:50.823547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.823573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.823699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.823737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.823866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.823892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.823987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.824016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.824138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.824163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.824246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.824271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.824378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.824403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.824483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.824508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.824646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.824671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.824783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.824808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 
00:23:53.960 [2024-07-25 13:52:50.824927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.824955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.825073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.825099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.825192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.825218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.825355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.825381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.825490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.825516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.825628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.825653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.825752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.825781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.825898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.825925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.826078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.826115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.960 [2024-07-25 13:52:50.826192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.826218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 
00:23:53.960 [2024-07-25 13:52:50.826312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.960 [2024-07-25 13:52:50.826352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.960 qpair failed and we were unable to recover it. 00:23:53.961 [2024-07-25 13:52:50.826467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.961 [2024-07-25 13:52:50.826494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.961 qpair failed and we were unable to recover it. 00:23:53.961 [2024-07-25 13:52:50.826633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.961 [2024-07-25 13:52:50.826660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.961 qpair failed and we were unable to recover it. 00:23:53.961 [2024-07-25 13:52:50.826747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.961 [2024-07-25 13:52:50.826772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.961 qpair failed and we were unable to recover it. 00:23:53.961 [2024-07-25 13:52:50.826912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.961 [2024-07-25 13:52:50.826939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.961 qpair failed and we were unable to recover it. 00:23:53.961 [2024-07-25 13:52:50.827032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.961 [2024-07-25 13:52:50.827066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.961 qpair failed and we were unable to recover it. 00:23:53.961 [2024-07-25 13:52:50.827190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.961 [2024-07-25 13:52:50.827218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.961 qpair failed and we were unable to recover it. 00:23:53.961 [2024-07-25 13:52:50.827336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.961 [2024-07-25 13:52:50.827365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.961 qpair failed and we were unable to recover it. 00:23:53.961 [2024-07-25 13:52:50.827467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.961 [2024-07-25 13:52:50.827533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.961 qpair failed and we were unable to recover it. 00:23:53.961 [2024-07-25 13:52:50.827760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.961 [2024-07-25 13:52:50.827813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.961 qpair failed and we were unable to recover it. 
00:23:53.961 [2024-07-25 13:52:50.827953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.961 [2024-07-25 13:52:50.827986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.961 qpair failed and we were unable to recover it. 00:23:53.961 [2024-07-25 13:52:50.828109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.961 [2024-07-25 13:52:50.828135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.961 qpair failed and we were unable to recover it. 00:23:53.961 [2024-07-25 13:52:50.828252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.961 [2024-07-25 13:52:50.828279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.961 qpair failed and we were unable to recover it. 00:23:53.961 [2024-07-25 13:52:50.828403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.961 [2024-07-25 13:52:50.828430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.961 qpair failed and we were unable to recover it. 00:23:53.961 [2024-07-25 13:52:50.828542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.961 [2024-07-25 13:52:50.828568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.961 qpair failed and we were unable to recover it. 00:23:53.961 [2024-07-25 13:52:50.828709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.961 [2024-07-25 13:52:50.828761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.961 qpair failed and we were unable to recover it. 00:23:53.961 [2024-07-25 13:52:50.828872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.961 [2024-07-25 13:52:50.828897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.961 qpair failed and we were unable to recover it. 00:23:53.961 [2024-07-25 13:52:50.828978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.961 [2024-07-25 13:52:50.829003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.961 qpair failed and we were unable to recover it. 00:23:53.961 [2024-07-25 13:52:50.829127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.961 [2024-07-25 13:52:50.829152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.961 qpair failed and we were unable to recover it. 00:23:53.961 [2024-07-25 13:52:50.829234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.961 [2024-07-25 13:52:50.829259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.961 qpair failed and we were unable to recover it. 
00:23:53.961 [2024-07-25 13:52:50.829353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.961 [2024-07-25 13:52:50.829377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.961 qpair failed and we were unable to recover it.
00:23:53.961 [2024-07-25 13:52:50.829452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.961 [2024-07-25 13:52:50.829477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.961 qpair failed and we were unable to recover it.
00:23:53.961 [2024-07-25 13:52:50.829594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.961 [2024-07-25 13:52:50.829619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.961 qpair failed and we were unable to recover it.
00:23:53.961 [2024-07-25 13:52:50.829737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.961 [2024-07-25 13:52:50.829765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.961 qpair failed and we were unable to recover it.
00:23:53.961 [2024-07-25 13:52:50.829888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.961 [2024-07-25 13:52:50.829914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.961 qpair failed and we were unable to recover it.
00:23:53.961 [2024-07-25 13:52:50.830070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.961 [2024-07-25 13:52:50.830123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.961 qpair failed and we were unable to recover it.
00:23:53.961 [2024-07-25 13:52:50.830249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.961 [2024-07-25 13:52:50.830278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.961 qpair failed and we were unable to recover it.
00:23:53.961 [2024-07-25 13:52:50.830425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.961 [2024-07-25 13:52:50.830453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.961 qpair failed and we were unable to recover it.
00:23:53.961 [2024-07-25 13:52:50.830569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.961 [2024-07-25 13:52:50.830595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.961 qpair failed and we were unable to recover it.
00:23:53.961 [2024-07-25 13:52:50.830710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.961 [2024-07-25 13:52:50.830736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.961 qpair failed and we were unable to recover it.
00:23:53.961 [2024-07-25 13:52:50.830861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.961 [2024-07-25 13:52:50.830899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.961 qpair failed and we were unable to recover it.
00:23:53.961 [2024-07-25 13:52:50.830998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.961 [2024-07-25 13:52:50.831024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.961 qpair failed and we were unable to recover it.
00:23:53.961 [2024-07-25 13:52:50.831133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.961 [2024-07-25 13:52:50.831159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.961 qpair failed and we were unable to recover it.
00:23:53.961 [2024-07-25 13:52:50.831266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.961 [2024-07-25 13:52:50.831293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.961 qpair failed and we were unable to recover it.
00:23:53.961 [2024-07-25 13:52:50.831392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.961 [2024-07-25 13:52:50.831417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.961 qpair failed and we were unable to recover it.
00:23:53.961 [2024-07-25 13:52:50.831531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.961 [2024-07-25 13:52:50.831556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.831696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.831722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.831850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.831896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.832012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.832040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.832155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.832184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.832306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.832343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.832458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.832483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.832567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.832594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.832731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.832757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.832849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.832876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.832959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.832986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.833106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.833133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.833218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.833244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.833333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.833359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.833535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.833582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.833671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.833701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.833823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.833849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.833966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.833992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.834116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.834143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.834227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.834254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.834371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.834405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.834518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.834544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.834631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.834657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.834771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.834797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.834893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.834919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.835050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.835083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.835212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.835239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.835321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.835354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.835477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.835504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.835623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.835649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.835789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.835817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.835929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.835955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.836074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.836101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.836229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.836258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.836335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.962 [2024-07-25 13:52:50.836373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.962 qpair failed and we were unable to recover it.
00:23:53.962 [2024-07-25 13:52:50.836464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.836489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.836570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.836595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.836748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.836773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.836893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.836919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.837009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.837036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.837181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.837220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.837374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.837403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.837543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.837574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.837658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.837684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.837764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.837789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.837901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.837926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.838036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.838070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.838192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.838217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.838302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.838339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.838458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.838483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.838562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.838587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.838705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.838732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.838861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.838888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.839006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.839033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.839149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.839176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.839290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.839317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.839473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.839501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.839619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.839646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.839738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.839766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.839903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.839942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.840064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.840091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.840182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.840210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.840321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.840348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.840490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.840518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.840685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.840743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.840836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.840860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.840972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.840997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.841115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.841141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.841223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.841249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.963 [2024-07-25 13:52:50.841364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.963 [2024-07-25 13:52:50.841406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.963 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.841524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.841550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.841693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.841721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.841832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.841858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.841945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.841970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.842082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.842119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.842229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.842257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.842357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.842384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.842527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.842555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.842744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.842808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.842953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.842981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.843130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.843157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.843245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.843271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.843396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.843423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.843580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.843607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.843687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.843713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.843825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.843849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.843926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.843951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.844071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.844098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.844221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.844248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.844399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.844427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.844537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.844562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.844649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.844675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.844760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.844786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.844910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.844937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.845047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.845079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.845196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.845222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.845348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.845379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.845465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.845490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.845575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.845599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.964 qpair failed and we were unable to recover it.
00:23:53.964 [2024-07-25 13:52:50.845682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.964 [2024-07-25 13:52:50.845707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.845807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.845845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.845966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.845993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.846136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.846162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.846279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.846306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.846426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.846451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.846594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.846622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.846733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.846758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.846867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.846908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.847009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.847047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.847160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.847187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.847301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.847337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.847427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.847451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.847538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.847564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.847802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.847829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.847971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.848003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.848128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.848169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.848266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.848295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.848417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.848446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.848590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.848617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.848817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.848873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.848969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.848995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.849085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.849123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.849213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.849241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.849409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.849450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.849629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.849680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.849904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.849953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.850077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.850110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.850222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.850247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.850338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.850365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.850453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.850479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.850619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.850645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.850753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.850778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.850891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.850918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.965 [2024-07-25 13:52:50.851036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.965 [2024-07-25 13:52:50.851070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.965 qpair failed and we were unable to recover it.
00:23:53.966 [2024-07-25 13:52:50.851165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.966 [2024-07-25 13:52:50.851190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.966 qpair failed and we were unable to recover it.
00:23:53.966 [2024-07-25 13:52:50.851271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.966 [2024-07-25 13:52:50.851297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.966 qpair failed and we were unable to recover it.
00:23:53.966 [2024-07-25 13:52:50.851395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.966 [2024-07-25 13:52:50.851426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.966 qpair failed and we were unable to recover it.
00:23:53.966 [2024-07-25 13:52:50.851545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.966 [2024-07-25 13:52:50.851573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.966 qpair failed and we were unable to recover it.
00:23:53.966 [2024-07-25 13:52:50.851696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.966 [2024-07-25 13:52:50.851723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.966 qpair failed and we were unable to recover it.
00:23:53.966 [2024-07-25 13:52:50.851821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.966 [2024-07-25 13:52:50.851849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.966 qpair failed and we were unable to recover it.
00:23:53.966 [2024-07-25 13:52:50.851963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.966 [2024-07-25 13:52:50.851989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.966 qpair failed and we were unable to recover it.
00:23:53.966 [2024-07-25 13:52:50.852078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.966 [2024-07-25 13:52:50.852108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.966 qpair failed and we were unable to recover it.
00:23:53.966 [2024-07-25 13:52:50.852223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.966 [2024-07-25 13:52:50.852251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.966 qpair failed and we were unable to recover it.
00:23:53.966 [2024-07-25 13:52:50.852367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.966 [2024-07-25 13:52:50.852395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.966 qpair failed and we were unable to recover it.
00:23:53.966 [2024-07-25 13:52:50.852534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.966 [2024-07-25 13:52:50.852562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.966 qpair failed and we were unable to recover it.
00:23:53.966 [2024-07-25 13:52:50.852682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.966 [2024-07-25 13:52:50.852710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.966 qpair failed and we were unable to recover it.
00:23:53.966 [2024-07-25 13:52:50.852855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.966 [2024-07-25 13:52:50.852884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.966 qpair failed and we were unable to recover it.
00:23:53.966 [2024-07-25 13:52:50.853005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.966 [2024-07-25 13:52:50.853034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.966 qpair failed and we were unable to recover it.
00:23:53.966 [2024-07-25 13:52:50.853193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.966 [2024-07-25 13:52:50.853220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.966 qpair failed and we were unable to recover it.
00:23:53.966 [2024-07-25 13:52:50.853328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.966 [2024-07-25 13:52:50.853356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.966 qpair failed and we were unable to recover it.
00:23:53.966 [2024-07-25 13:52:50.853502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.966 [2024-07-25 13:52:50.853530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.966 qpair failed and we were unable to recover it.
00:23:53.966 [2024-07-25 13:52:50.853622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.966 [2024-07-25 13:52:50.853648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.966 qpair failed and we were unable to recover it.
00:23:53.966 [2024-07-25 13:52:50.853815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.966 [2024-07-25 13:52:50.853871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.966 qpair failed and we were unable to recover it.
00:23:53.966 [2024-07-25 13:52:50.854015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.966 [2024-07-25 13:52:50.854043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.966 qpair failed and we were unable to recover it.
00:23:53.966 [2024-07-25 13:52:50.854165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.966 [2024-07-25 13:52:50.854192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.966 qpair failed and we were unable to recover it.
00:23:53.966 [2024-07-25 13:52:50.854313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.966 [2024-07-25 13:52:50.854340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.966 qpair failed and we were unable to recover it. 00:23:53.966 [2024-07-25 13:52:50.854456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.966 [2024-07-25 13:52:50.854483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.966 qpair failed and we were unable to recover it. 00:23:53.966 [2024-07-25 13:52:50.854598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.966 [2024-07-25 13:52:50.854625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.966 qpair failed and we were unable to recover it. 00:23:53.966 [2024-07-25 13:52:50.854719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.966 [2024-07-25 13:52:50.854745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.966 qpair failed and we were unable to recover it. 00:23:53.966 [2024-07-25 13:52:50.854827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.966 [2024-07-25 13:52:50.854853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.966 qpair failed and we were unable to recover it. 00:23:53.966 [2024-07-25 13:52:50.854970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.966 [2024-07-25 13:52:50.854995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.966 qpair failed and we were unable to recover it. 00:23:53.966 [2024-07-25 13:52:50.855100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.966 [2024-07-25 13:52:50.855126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.966 qpair failed and we were unable to recover it. 00:23:53.966 [2024-07-25 13:52:50.855215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.966 [2024-07-25 13:52:50.855242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.966 qpair failed and we were unable to recover it. 00:23:53.966 [2024-07-25 13:52:50.855369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.966 [2024-07-25 13:52:50.855409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.966 qpair failed and we were unable to recover it. 00:23:53.966 [2024-07-25 13:52:50.855500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.966 [2024-07-25 13:52:50.855527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.966 qpair failed and we were unable to recover it. 
00:23:53.966 [2024-07-25 13:52:50.855640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.966 [2024-07-25 13:52:50.855668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.966 qpair failed and we were unable to recover it. 00:23:53.966 [2024-07-25 13:52:50.855751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.966 [2024-07-25 13:52:50.855777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.966 qpair failed and we were unable to recover it. 00:23:53.966 [2024-07-25 13:52:50.855871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.966 [2024-07-25 13:52:50.855897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.966 qpair failed and we were unable to recover it. 00:23:53.966 [2024-07-25 13:52:50.856014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.856039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.856148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.856177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.856263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.856290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.856375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.856401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.856551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.856579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.856689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.856715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.856801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.856826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 
00:23:53.967 [2024-07-25 13:52:50.856943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.856972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.857093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.857125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.857246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.857274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.857370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.857396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.857538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.857565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.857692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.857733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.857855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.857882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.858017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.858066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.858168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.858194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.858280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.858305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 
00:23:53.967 [2024-07-25 13:52:50.858388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.858412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.858566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.858593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.858710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.858738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.858944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.858972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.859090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.859117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.859220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.859247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.859332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.859358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.859499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.859526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.859697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.859725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.859861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.859887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 
00:23:53.967 [2024-07-25 13:52:50.860003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.860029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.860163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.860192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.860281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.860306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.860382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.860407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.860563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.860616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.860698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.860723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.860876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.860916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.967 [2024-07-25 13:52:50.861021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.967 [2024-07-25 13:52:50.861049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.967 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.861206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.861239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.861350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.861414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 
00:23:53.968 [2024-07-25 13:52:50.861598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.861655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.861793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.861821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.861958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.861986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.862077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.862105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.862199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.862226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.862340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.862367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.862525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.862576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.862690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.862717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.862829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.862855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.863001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.863029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 
00:23:53.968 [2024-07-25 13:52:50.863164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.863204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.863304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.863333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.863430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.863462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.863579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.863607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.863765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.863817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.863930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.863957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.864091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.864131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.864227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.864253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.864366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.864425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.864650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.864676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 
00:23:53.968 [2024-07-25 13:52:50.864757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.864782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.864903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.864931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.865048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.865083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.865184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.865215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.865311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.865339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.865464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.865493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.865618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.865645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.865766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.865794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.865941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.865969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.866081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.866108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 
00:23:53.968 [2024-07-25 13:52:50.866227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.866254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.866338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.968 [2024-07-25 13:52:50.866364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.968 qpair failed and we were unable to recover it. 00:23:53.968 [2024-07-25 13:52:50.866502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.866529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.866605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.866630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.866772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.866799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.866904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.866945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.867108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.867149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.867245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.867272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.867389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.867421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.867612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.867640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 
00:23:53.969 [2024-07-25 13:52:50.867817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.867875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.868020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.868047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.868184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.868225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.868347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.868376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.868495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.868523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.868618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.868644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.868765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.868792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.868912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.868939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.869053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.869086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.869175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.869201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 
00:23:53.969 [2024-07-25 13:52:50.869340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.869367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.869479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.869505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.869604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.869631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.869760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.869801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.869901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.869930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.870028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.870075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.870201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.870229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.870317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.870342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.870482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.870509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.870682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.870738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 
00:23:53.969 [2024-07-25 13:52:50.870922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.870979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.969 [2024-07-25 13:52:50.871093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.969 [2024-07-25 13:52:50.871118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.969 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.871205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.871236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.871391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.871450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.871639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.871668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.871816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.871848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.871986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.872026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.872138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.872166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.872283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.872312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.872430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.872458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 
00:23:53.970 [2024-07-25 13:52:50.872648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.872676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.872762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.872788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.872903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.872930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.873069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.873097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.873180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.873206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.873321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.873347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.873467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.873494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.873580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.873605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.873714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.873741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.873846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.873886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 
00:23:53.970 [2024-07-25 13:52:50.874033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.874081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.874175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.874201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.874293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.874319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.874434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.874461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.874582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.874610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.874728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.874755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.874849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.874878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.874990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.875019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.875171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.875200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.875320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.875349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 
00:23:53.970 [2024-07-25 13:52:50.875465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.875493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.875636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.875664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.875787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.875815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.875932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.875961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.876050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.876081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.876175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.970 [2024-07-25 13:52:50.876200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.970 qpair failed and we were unable to recover it. 00:23:53.970 [2024-07-25 13:52:50.876337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.876365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.876449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.876474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.876633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.876688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.876801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.876828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 
00:23:53.971 [2024-07-25 13:52:50.876921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.876959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.877081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.877110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.877251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.877279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.877367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.877393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.877545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.877597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.877715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.877747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.877862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.877890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.878046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.878095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.878191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.878216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.878331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.878358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 
00:23:53.971 [2024-07-25 13:52:50.878478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.878504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.878590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.878615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.878749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.878812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.878924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.878950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.879068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.879094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.879177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.879203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.879291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.879316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.879431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.879457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.879540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.879565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.879710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.879739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 
00:23:53.971 [2024-07-25 13:52:50.879859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.879889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.880004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.880031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.880186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.880213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.880296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.880322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.880437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.880466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.880572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.880599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.880690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.880716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.880859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.880886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.881030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.881077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 00:23:53.971 [2024-07-25 13:52:50.881208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.881234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.971 qpair failed and we were unable to recover it. 
00:23:53.971 [2024-07-25 13:52:50.881351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.971 [2024-07-25 13:52:50.881378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.881462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.881487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.881601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.881632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.881752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.881779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.881887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.881913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.882032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.882069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.882188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.882215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.882309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.882335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.882445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.882472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.882548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.882572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 
00:23:53.972 [2024-07-25 13:52:50.882682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.882708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.882824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.882851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.882982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.883022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.883118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.883145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.883255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.883282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.883398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.883426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.883549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.883577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.883710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.883751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.883881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.883910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.884020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.884047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 
00:23:53.972 [2024-07-25 13:52:50.884171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.884199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.884315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.884342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.884422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.884447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.884632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.884691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.884779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.884804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.884914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.884941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.885025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.885052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.885150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.885178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.885291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.885330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.885449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.885482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 
00:23:53.972 [2024-07-25 13:52:50.885602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.885630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.972 [2024-07-25 13:52:50.885716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.972 [2024-07-25 13:52:50.885741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.972 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.885822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.885849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.885944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.885971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.886112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.886139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.886253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.886280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.886363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.886388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.886526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.886552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.886664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.886691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.886773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.886798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 
00:23:53.973 [2024-07-25 13:52:50.886937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.886964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.887073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.887100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.887195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.887220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.887335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.887362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.887477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.887505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.887621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.887648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.887759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.887786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.887871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.887895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.887984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.888009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.888127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.888167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 
00:23:53.973 [2024-07-25 13:52:50.888265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.888293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.888389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.888415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.888508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.888536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.888657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.888685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.888764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.888790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.888881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.888912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.889029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.889067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.889202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.889242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.889359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.889387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.889478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.889506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 
00:23:53.973 [2024-07-25 13:52:50.889623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.889650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.889763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.889791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.973 [2024-07-25 13:52:50.889884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.973 [2024-07-25 13:52:50.889918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.973 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.890037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.890071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.890197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.890225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.890317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.890342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.890421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.890446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.890565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.890592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.890769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.890825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.890905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.890929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 
00:23:53.974 [2024-07-25 13:52:50.891045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.891079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.891198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.891224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.891311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.891336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.891442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.891469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.891581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.891607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.891693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.891718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.891825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.891851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.891987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.892013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.892150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.892190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.892317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.892345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 
00:23:53.974 [2024-07-25 13:52:50.892461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.892489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.892605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.892632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.892744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.892771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.892874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.892914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.893072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.893100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.893209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.893236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.893349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.893376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.893515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.893542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.893660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.893687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.893773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.893798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 
00:23:53.974 [2024-07-25 13:52:50.893910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.893936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.894069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.894110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.894258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.894287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.894407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.894435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.894529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.894555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.974 [2024-07-25 13:52:50.894641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.974 [2024-07-25 13:52:50.894668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.974 qpair failed and we were unable to recover it. 00:23:53.975 [2024-07-25 13:52:50.894751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.894777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 00:23:53.975 [2024-07-25 13:52:50.894922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.894949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 00:23:53.975 [2024-07-25 13:52:50.895094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.895123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 00:23:53.975 [2024-07-25 13:52:50.895232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.895272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 
00:23:53.975 [2024-07-25 13:52:50.895401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.895430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 00:23:53.975 [2024-07-25 13:52:50.895522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.895547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 00:23:53.975 [2024-07-25 13:52:50.895631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.895658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 00:23:53.975 [2024-07-25 13:52:50.895750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.895782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 00:23:53.975 [2024-07-25 13:52:50.895922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.895950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 00:23:53.975 [2024-07-25 13:52:50.896069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.896097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 00:23:53.975 [2024-07-25 13:52:50.896195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.896225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 00:23:53.975 [2024-07-25 13:52:50.896341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.896368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 00:23:53.975 [2024-07-25 13:52:50.896490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.896517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 00:23:53.975 [2024-07-25 13:52:50.896695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.896756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 
00:23:53.975 [2024-07-25 13:52:50.896857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.896885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 00:23:53.975 [2024-07-25 13:52:50.897002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.897031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 00:23:53.975 [2024-07-25 13:52:50.897173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.897201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 00:23:53.975 [2024-07-25 13:52:50.897321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.897347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 00:23:53.975 [2024-07-25 13:52:50.897490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.897517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 00:23:53.975 [2024-07-25 13:52:50.897631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.897658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 00:23:53.975 [2024-07-25 13:52:50.897779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.897807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 00:23:53.975 [2024-07-25 13:52:50.897935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.897975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 00:23:53.975 [2024-07-25 13:52:50.898130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.898158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 00:23:53.975 [2024-07-25 13:52:50.898247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.975 [2024-07-25 13:52:50.898273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.975 qpair failed and we were unable to recover it. 
00:23:53.975 [2024-07-25 13:52:50.898357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.898383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.898501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.898528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.898619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.898645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.898758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.898789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.898887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.898917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.899028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.899055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.899161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.899186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.899320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.899347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.899432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.899457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.899574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.899600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 
00:23:53.976 [2024-07-25 13:52:50.899713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.899740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.899860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.899887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.900032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.900069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.900180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.900207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.900302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.900329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.900413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.900438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.900552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.900579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.900669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.900697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.900831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.900872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.900986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.901015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 
00:23:53.976 [2024-07-25 13:52:50.901169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.901197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.901284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.901311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.901430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.901456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.901641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.901694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.901808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.901835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.901955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.901986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.902125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.902164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.902284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.902312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.902456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.902483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.902622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.902649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 
00:23:53.976 [2024-07-25 13:52:50.902774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.902802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.902895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.902923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.903083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.903124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.976 [2024-07-25 13:52:50.903273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.976 [2024-07-25 13:52:50.903302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.976 qpair failed and we were unable to recover it. 00:23:53.977 [2024-07-25 13:52:50.903423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.977 [2024-07-25 13:52:50.903451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.977 qpair failed and we were unable to recover it. 00:23:53.977 [2024-07-25 13:52:50.903545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.977 [2024-07-25 13:52:50.903573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.977 qpair failed and we were unable to recover it. 00:23:53.977 [2024-07-25 13:52:50.903654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.977 [2024-07-25 13:52:50.903681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.977 qpair failed and we were unable to recover it. 00:23:53.977 [2024-07-25 13:52:50.903844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.977 [2024-07-25 13:52:50.903884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.977 qpair failed and we were unable to recover it. 00:23:53.977 [2024-07-25 13:52:50.903989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.977 [2024-07-25 13:52:50.904030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.977 qpair failed and we were unable to recover it. 00:23:53.977 [2024-07-25 13:52:50.904163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.977 [2024-07-25 13:52:50.904195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.977 qpair failed and we were unable to recover it. 
00:23:53.977 [2024-07-25 13:52:50.904292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.977 [2024-07-25 13:52:50.904320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.977 qpair failed and we were unable to recover it.
00:23:53.977 [2024-07-25 13:52:50.904434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.977 [2024-07-25 13:52:50.904461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.977 qpair failed and we were unable to recover it.
00:23:53.977 [2024-07-25 13:52:50.904550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.977 [2024-07-25 13:52:50.904575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.977 qpair failed and we were unable to recover it.
00:23:53.977 [2024-07-25 13:52:50.904690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.977 [2024-07-25 13:52:50.904717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.977 qpair failed and we were unable to recover it.
00:23:53.977 [2024-07-25 13:52:50.904818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.977 [2024-07-25 13:52:50.904859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.977 qpair failed and we were unable to recover it.
00:23:53.977 [2024-07-25 13:52:50.904990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.977 [2024-07-25 13:52:50.905031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.977 qpair failed and we were unable to recover it.
00:23:53.977 [2024-07-25 13:52:50.905133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.977 [2024-07-25 13:52:50.905162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.977 qpair failed and we were unable to recover it.
00:23:53.977 [2024-07-25 13:52:50.905283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.977 [2024-07-25 13:52:50.905310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.977 qpair failed and we were unable to recover it.
00:23:53.977 [2024-07-25 13:52:50.905425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.977 [2024-07-25 13:52:50.905453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.977 qpair failed and we were unable to recover it.
00:23:53.977 [2024-07-25 13:52:50.905630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.977 [2024-07-25 13:52:50.905657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.977 qpair failed and we were unable to recover it.
00:23:53.977 [2024-07-25 13:52:50.905768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.977 [2024-07-25 13:52:50.905795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.977 qpair failed and we were unable to recover it.
00:23:53.977 [2024-07-25 13:52:50.905903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.977 [2024-07-25 13:52:50.905930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.977 qpair failed and we were unable to recover it.
00:23:53.977 [2024-07-25 13:52:50.906088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.977 [2024-07-25 13:52:50.906129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.977 qpair failed and we were unable to recover it.
00:23:53.977 [2024-07-25 13:52:50.906219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.977 [2024-07-25 13:52:50.906251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.977 qpair failed and we were unable to recover it.
00:23:53.977 [2024-07-25 13:52:50.906367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.977 [2024-07-25 13:52:50.906394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.977 qpair failed and we were unable to recover it.
00:23:53.977 [2024-07-25 13:52:50.906510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.977 [2024-07-25 13:52:50.906538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.977 qpair failed and we were unable to recover it.
00:23:53.977 [2024-07-25 13:52:50.906657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.977 [2024-07-25 13:52:50.906685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.977 qpair failed and we were unable to recover it.
00:23:53.977 [2024-07-25 13:52:50.906812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.977 [2024-07-25 13:52:50.906840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.977 qpair failed and we were unable to recover it.
00:23:53.977 [2024-07-25 13:52:50.906958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.977 [2024-07-25 13:52:50.906986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.977 qpair failed and we were unable to recover it.
00:23:53.977 [2024-07-25 13:52:50.907112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.977 [2024-07-25 13:52:50.907140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.977 qpair failed and we were unable to recover it.
00:23:53.977 [2024-07-25 13:52:50.907261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.977 [2024-07-25 13:52:50.907288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.977 qpair failed and we were unable to recover it.
00:23:53.977 [2024-07-25 13:52:50.907402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.977 [2024-07-25 13:52:50.907430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.977 qpair failed and we were unable to recover it.
00:23:53.977 [2024-07-25 13:52:50.907606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.977 [2024-07-25 13:52:50.907667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.977 qpair failed and we were unable to recover it.
00:23:53.977 [2024-07-25 13:52:50.907779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.907806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.907924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.907951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.908050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.908098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.908241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.908280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.908397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.908424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.908604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.908631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.908851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.908904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.909045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.909159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.909255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.909282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.909388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.909415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.909533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.909560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.909738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.909791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.909903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.909930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.910039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.910073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.910158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.910183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.910258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.910285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.910397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.910423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.910504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.910529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.910645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.910671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.910785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.910811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.910957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.910987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.911112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.911140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.911252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.911280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.911375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.911401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.911488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.911517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.911616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.911643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.911738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.911766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.911872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.911898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.912042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.912083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.912177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.912205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.912292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.912317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.912392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.978 [2024-07-25 13:52:50.912417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.978 qpair failed and we were unable to recover it.
00:23:53.978 [2024-07-25 13:52:50.912500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.912527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.912626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.912665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.912789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.912822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.912969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.912996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.913114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.913141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.913256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.913283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.913375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.913401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.913493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.913521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.913636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.913663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.913780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.913807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.913923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.913950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.914036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.914068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.914179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.914205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.914297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.914323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.914461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.914519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.914638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.914667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.914787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.914815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.914898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.914924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.915070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.915098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.915191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.915218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.915306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.915334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.915415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.915441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.915555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.915582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.915721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.915748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.915888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.915916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.916007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.916032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.916167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.916208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.916326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.916355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.916449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.916477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.916632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.916687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.916837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.916866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.979 qpair failed and we were unable to recover it.
00:23:53.979 [2024-07-25 13:52:50.916981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.979 [2024-07-25 13:52:50.917008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.917097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.917122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.917230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.917257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.917368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.917395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.917627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.917686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.917829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.917856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.917946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.917973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.918092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.918122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.918211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.918237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.918350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.918378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.918467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.918493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.918644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.918689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.918836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.918864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.918955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.918982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.919073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.919099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.919215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.919242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.919325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.919350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.919429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.919453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.919564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.919590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.919668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.919693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.919778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.919803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.919912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.919949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.920070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.920095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.920234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.920262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.920356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.920395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.920517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.920545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.980 [2024-07-25 13:52:50.920687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.980 [2024-07-25 13:52:50.920714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.980 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.920836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.920863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.920972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.920999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.921095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.921121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.921234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.921261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.921334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.921359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.921465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.921491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.921576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.921601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.921706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.921747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.921870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.921898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.922017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.922044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.922172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.922198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.922283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.922313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.922418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.922445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.922527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.922551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.922662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.922688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.922800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.922826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.922944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.922973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.923055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.923086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.923181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.923208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.923299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.923327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.923444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.923470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.923585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.923612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.923728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.923755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.923861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.923902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.923997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.924026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.924166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.924193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.924314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.924340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.924434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.924459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.924536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.924561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.924718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.924768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.924882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.924908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.925020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.925049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.925170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.925198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.925343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.981 [2024-07-25 13:52:50.925371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.981 qpair failed and we were unable to recover it.
00:23:53.981 [2024-07-25 13:52:50.925600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.982 [2024-07-25 13:52:50.925651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.982 qpair failed and we were unable to recover it.
00:23:53.982 [2024-07-25 13:52:50.925875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.982 [2024-07-25 13:52:50.925930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.982 qpair failed and we were unable to recover it.
00:23:53.982 [2024-07-25 13:52:50.926048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.982 [2024-07-25 13:52:50.926081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.982 qpair failed and we were unable to recover it.
00:23:53.982 [2024-07-25 13:52:50.926170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.982 [2024-07-25 13:52:50.926197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.982 qpair failed and we were unable to recover it.
00:23:53.982 [2024-07-25 13:52:50.926294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.982 [2024-07-25 13:52:50.926326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.982 qpair failed and we were unable to recover it.
00:23:53.982 [2024-07-25 13:52:50.926468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.982 [2024-07-25 13:52:50.926495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.982 qpair failed and we were unable to recover it.
00:23:53.982 [2024-07-25 13:52:50.926611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.982 [2024-07-25 13:52:50.926638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.982 qpair failed and we were unable to recover it.
00:23:53.982 [2024-07-25 13:52:50.926751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.982 [2024-07-25 13:52:50.926780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.982 qpair failed and we were unable to recover it.
00:23:53.982 [2024-07-25 13:52:50.926907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.982 [2024-07-25 13:52:50.926947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.982 qpair failed and we were unable to recover it.
00:23:53.982 [2024-07-25 13:52:50.927052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.982 [2024-07-25 13:52:50.927099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.982 qpair failed and we were unable to recover it.
00:23:53.982 [2024-07-25 13:52:50.927222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.982 [2024-07-25 13:52:50.927251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.982 qpair failed and we were unable to recover it.
00:23:53.982 [2024-07-25 13:52:50.927372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.982 [2024-07-25 13:52:50.927400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.982 qpair failed and we were unable to recover it.
00:23:53.982 [2024-07-25 13:52:50.927484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.982 [2024-07-25 13:52:50.927510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.982 qpair failed and we were unable to recover it.
00:23:53.982 [2024-07-25 13:52:50.927591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.982 [2024-07-25 13:52:50.927619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.982 qpair failed and we were unable to recover it.
00:23:53.982 [2024-07-25 13:52:50.927763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.982 [2024-07-25 13:52:50.927792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:53.982 qpair failed and we were unable to recover it.
00:23:53.982 [2024-07-25 13:52:50.927932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.982 [2024-07-25 13:52:50.927973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.982 qpair failed and we were unable to recover it.
00:23:53.982 [2024-07-25 13:52:50.928136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.982 [2024-07-25 13:52:50.928176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.982 qpair failed and we were unable to recover it.
00:23:53.982 [2024-07-25 13:52:50.928267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.982 [2024-07-25 13:52:50.928295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.982 qpair failed and we were unable to recover it.
00:23:53.982 [2024-07-25 13:52:50.928379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.982 [2024-07-25 13:52:50.928404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.982 qpair failed and we were unable to recover it.
00:23:53.982 [2024-07-25 13:52:50.928500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.982 [2024-07-25 13:52:50.928525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.982 qpair failed and we were unable to recover it.
00:23:53.982 [2024-07-25 13:52:50.928623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.982 [2024-07-25 13:52:50.928651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.982 qpair failed and we were unable to recover it.
00:23:53.982 [2024-07-25 13:52:50.928767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.982 [2024-07-25 13:52:50.928795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.982 qpair failed and we were unable to recover it.
00:23:53.982 [2024-07-25 13:52:50.928912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.982 [2024-07-25 13:52:50.928940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.982 qpair failed and we were unable to recover it.
00:23:53.982 [2024-07-25 13:52:50.929064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.982 [2024-07-25 13:52:50.929092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.982 qpair failed and we were unable to recover it. 00:23:53.982 [2024-07-25 13:52:50.929209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.982 [2024-07-25 13:52:50.929236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.982 qpair failed and we were unable to recover it. 00:23:53.982 [2024-07-25 13:52:50.929329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.982 [2024-07-25 13:52:50.929356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.982 qpair failed and we were unable to recover it. 00:23:53.982 [2024-07-25 13:52:50.929463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.982 [2024-07-25 13:52:50.929491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.982 qpair failed and we were unable to recover it. 00:23:53.982 [2024-07-25 13:52:50.929605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.982 [2024-07-25 13:52:50.929633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.982 qpair failed and we were unable to recover it. 00:23:53.982 [2024-07-25 13:52:50.929720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.982 [2024-07-25 13:52:50.929746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.982 qpair failed and we were unable to recover it. 00:23:53.982 [2024-07-25 13:52:50.929890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.982 [2024-07-25 13:52:50.929918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.982 qpair failed and we were unable to recover it. 00:23:53.982 [2024-07-25 13:52:50.930011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.982 [2024-07-25 13:52:50.930039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.982 qpair failed and we were unable to recover it. 00:23:53.982 [2024-07-25 13:52:50.930157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.982 [2024-07-25 13:52:50.930198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.982 qpair failed and we were unable to recover it. 00:23:53.982 [2024-07-25 13:52:50.930321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.982 [2024-07-25 13:52:50.930350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.982 qpair failed and we were unable to recover it. 
00:23:53.982 [2024-07-25 13:52:50.930465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.982 [2024-07-25 13:52:50.930492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.982 qpair failed and we were unable to recover it. 00:23:53.982 [2024-07-25 13:52:50.930609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.982 [2024-07-25 13:52:50.930636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.982 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.930770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.930810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.930932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.930960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.931073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.931102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.931217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.931245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.931360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.931388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.931505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.931532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.931649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.931676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.931823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.931850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 
00:23:53.983 [2024-07-25 13:52:50.931986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.932013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.932117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.932153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.932298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.932326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.932470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.932497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.932614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.932641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.932755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.932783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.932909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.932949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.933049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.933085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.933229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.933257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.933410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.933437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 
00:23:53.983 [2024-07-25 13:52:50.933526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.933554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.933646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.933685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.933860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.933915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.934005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.934032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.934180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.934207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.934292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.934318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.934396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.934422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.934530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.934556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.934670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.934697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.934818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.934857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 
00:23:53.983 [2024-07-25 13:52:50.934977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.935005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.935091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.935117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.935202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.935230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.935318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.935345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.935432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.983 [2024-07-25 13:52:50.935460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.983 qpair failed and we were unable to recover it. 00:23:53.983 [2024-07-25 13:52:50.935540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.935566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.935646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.935672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.935774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.935814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.935903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.935931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.936031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.936078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 
00:23:53.984 [2024-07-25 13:52:50.936177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.936206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.936299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.936325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.936411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.936437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.936526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.936553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.936720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.936773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.936887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.936913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.937049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.937085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.937169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.937196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.937333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.937359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.937435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.937460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 
00:23:53.984 [2024-07-25 13:52:50.937573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.937599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.937681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.937706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.937793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.937822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.937946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.937974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.938094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.938122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.938238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.938265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.938357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.938385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.938507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.938535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.938622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.938648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.938791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.938818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 
00:23:53.984 [2024-07-25 13:52:50.938901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.938927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.939070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.939097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.939226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.939252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.939340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.939367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.939482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.939543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.939699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.939739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.939841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.939869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.939968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.939995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.940081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.940108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.984 qpair failed and we were unable to recover it. 00:23:53.984 [2024-07-25 13:52:50.940185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.984 [2024-07-25 13:52:50.940212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 
00:23:53.985 [2024-07-25 13:52:50.940352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.940379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.940462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.940490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.940636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.940665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.940811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.940839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.940956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.940984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.941071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.941096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.941207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.941233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.941319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.941345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.941487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.941539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.941636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.941662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 
00:23:53.985 [2024-07-25 13:52:50.941783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.941811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.941933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.941961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.942102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.942130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.942212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.942237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.942328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.942356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.942473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.942500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.942644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.942671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.942802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.942831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.942952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.942979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.943097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.943124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 
00:23:53.985 [2024-07-25 13:52:50.943236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.943263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.943381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.943408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.943497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.943523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.943639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.943666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.943763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.943789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.943900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.943926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.944034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.944066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.944185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.944212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.944348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.944374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 00:23:53.985 [2024-07-25 13:52:50.944461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.985 [2024-07-25 13:52:50.944488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.985 qpair failed and we were unable to recover it. 
00:23:53.986 [2024-07-25 13:52:50.944601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.944629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.944716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.944743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.944835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.944862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.944972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.944999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.945085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.945111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.945203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.945234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.945333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.945360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.945474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.945502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.945592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.945619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.945713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.945754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 
00:23:53.986 [2024-07-25 13:52:50.945913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.945941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.946070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.946098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.946189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.946213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.946439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.946490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.946670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.946732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.946876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.946904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.946986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.947012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.947130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.947159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.947276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.947303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.947447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.947474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 
00:23:53.986 [2024-07-25 13:52:50.947564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.947591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.947706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.947733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.947814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.947840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.947953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.947981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.948099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.948128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.948249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.948276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.948364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.948389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.948506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.948533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.948625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.948652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.948761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.948788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 
00:23:53.986 [2024-07-25 13:52:50.948873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.948902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.949019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.949047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.986 [2024-07-25 13:52:50.949176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.986 [2024-07-25 13:52:50.949205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.986 qpair failed and we were unable to recover it. 00:23:53.987 [2024-07-25 13:52:50.949319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.987 [2024-07-25 13:52:50.949346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.987 qpair failed and we were unable to recover it. 00:23:53.987 [2024-07-25 13:52:50.949465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.987 [2024-07-25 13:52:50.949492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.987 qpair failed and we were unable to recover it. 00:23:53.987 [2024-07-25 13:52:50.949636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.987 [2024-07-25 13:52:50.949663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.987 qpair failed and we were unable to recover it. 00:23:53.987 [2024-07-25 13:52:50.949742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.987 [2024-07-25 13:52:50.949768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.987 qpair failed and we were unable to recover it. 00:23:53.987 [2024-07-25 13:52:50.949880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.987 [2024-07-25 13:52:50.949906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.987 qpair failed and we were unable to recover it. 00:23:53.987 [2024-07-25 13:52:50.950020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.987 [2024-07-25 13:52:50.950048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.987 qpair failed and we were unable to recover it. 00:23:53.987 [2024-07-25 13:52:50.950175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.987 [2024-07-25 13:52:50.950202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:53.987 qpair failed and we were unable to recover it. 
00:23:53.987 [2024-07-25 13:52:50.950339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.987 [2024-07-25 13:52:50.950366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:53.987 qpair failed and we were unable to recover it.
00:23:53.987 [2024-07-25 13:52:50.950864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.987 [2024-07-25 13:52:50.950892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:53.987 qpair failed and we were unable to recover it.
00:23:53.987 [2024-07-25 13:52:50.951144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:53.987 [2024-07-25 13:52:50.951184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:53.987 qpair failed and we were unable to recover it.
00:23:54.273 [2024-07-25 13:52:50.968211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.273 [2024-07-25 13:52:50.968251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.273 qpair failed and we were unable to recover it.
00:23:54.275 (previous connect()/qpair failure triplet repeated continuously from 13:52:50.950339 through 13:52:50.984569, cycling over tqpair=0x118b250, 0x7f3c88000b90, 0x7f3c90000b90 and 0x7f3c98000b90, all with addr=10.0.0.2, port=4420, errno = 111)
00:23:54.275 [2024-07-25 13:52:50.984732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.275 [2024-07-25 13:52:50.984785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.275 qpair failed and we were unable to recover it. 00:23:54.275 [2024-07-25 13:52:50.984897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.275 [2024-07-25 13:52:50.984924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.275 qpair failed and we were unable to recover it. 00:23:54.275 [2024-07-25 13:52:50.985010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.275 [2024-07-25 13:52:50.985034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.275 qpair failed and we were unable to recover it. 00:23:54.275 [2024-07-25 13:52:50.985125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.985150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.985235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.985261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.985375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.985401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.985510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.985536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.985652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.985680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.985796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.985825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.985979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.986020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 
00:23:54.276 [2024-07-25 13:52:50.986143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.986180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.986298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.986327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.986477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.986503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.986611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.986638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.986719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.986744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.986831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.986859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.986975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.987002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.987132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.987161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.987254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.987280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.987368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.987395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 
00:23:54.276 [2024-07-25 13:52:50.987492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.987518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.987629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.987687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.987800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.987826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.987942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.987969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.988097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.988124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.988247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.988275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.988398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.988425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.988537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.988564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.988684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.988710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.988904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.988970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 
00:23:54.276 [2024-07-25 13:52:50.989185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.989212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.989301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.989328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.989580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.989645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.989850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.989916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.990129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.990156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.990269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.990296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.990409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.276 [2024-07-25 13:52:50.990436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.276 qpair failed and we were unable to recover it. 00:23:54.276 [2024-07-25 13:52:50.990689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.990754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.991026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.991122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.991262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.991288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 
00:23:54.277 [2024-07-25 13:52:50.991401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.991428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.991512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.991537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.991697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.991755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.991983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.992009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.992097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.992122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.992217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.992244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.992356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.992382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.992464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.992489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.992601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.992628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.992847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.992910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 
00:23:54.277 [2024-07-25 13:52:50.993086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.993117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.993211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.993238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.993325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.993352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.993464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.993491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.993629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.993655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.993885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.993950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.994141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.994168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.994277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.994304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.994379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.994405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.994484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.994511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 
00:23:54.277 [2024-07-25 13:52:50.994601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.994628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.994768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.994835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.995076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.995103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.995245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.277 [2024-07-25 13:52:50.995272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.277 qpair failed and we were unable to recover it. 00:23:54.277 [2024-07-25 13:52:50.995446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:50.995511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:50.995733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:50.995803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:50.996029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:50.996109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:50.996223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:50.996250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:50.996417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:50.996482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:50.996732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:50.996797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 
00:23:54.278 [2024-07-25 13:52:50.997084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:50.997148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:50.997270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:50.997296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:50.997415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:50.997440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:50.997554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:50.997579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:50.997858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:50.997923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:50.998125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:50.998152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:50.998265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:50.998291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:50.998412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:50.998439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:50.998618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:50.998645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:50.998785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:50.998812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 
00:23:54.278 [2024-07-25 13:52:50.998997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:50.999023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:50.999153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:50.999180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:50.999278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:50.999303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:50.999422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:50.999448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:50.999590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:50.999665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:50.999905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:50.999969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:51.000202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:51.000229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:51.000335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:51.000362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:51.000442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:51.000467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:51.000646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:51.000709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 
00:23:54.278 [2024-07-25 13:52:51.000907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:51.000938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:51.001054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:51.001085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:51.001204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:51.001232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:51.001323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:51.001350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:51.001440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:51.001465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:51.001577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:51.001604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:51.001731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:51.001776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:51.001959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:51.002022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.278 qpair failed and we were unable to recover it. 00:23:54.278 [2024-07-25 13:52:51.002207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.278 [2024-07-25 13:52:51.002234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.002314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.002339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 
00:23:54.279 [2024-07-25 13:52:51.002461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.002488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.002576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.002601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.002690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.002716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.002874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.002938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.003142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.003169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.003279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.003306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.003417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.003444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.003555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.003581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.003668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.003693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.003856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.003922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 
00:23:54.279 [2024-07-25 13:52:51.004249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.004276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.004468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.004494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.004696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.004760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.004959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.005025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.005259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.005326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.005591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.005655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.005965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.006037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.006352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.006417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.006628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.006691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.006912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.006975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 
00:23:54.279 [2024-07-25 13:52:51.007286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.007359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.007611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.007675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.007924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.007991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.008310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.008385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.008630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.008696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.008986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.009051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.009385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.009449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.009760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.009825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.010045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.010125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 00:23:54.279 [2024-07-25 13:52:51.010414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.279 [2024-07-25 13:52:51.010478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.279 qpair failed and we were unable to recover it. 
00:23:54.279 [2024-07-25 13:52:51.010765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:23:54.279 [2024-07-25 13:52:51.010839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 
00:23:54.279 qpair failed and we were unable to recover it. 
00:23:54.286 [2024-07-25 13:52:51.081257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.081324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.081568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.081646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.081911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.081977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.082240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.082308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.082603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.082669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.082916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.082992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.083306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.083374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.083617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.083682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.083915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.083980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.084268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.084335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 
00:23:54.286 [2024-07-25 13:52:51.084596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.084665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.084937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.085004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.085371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.085473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.085785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.085855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.086144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.086216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.086511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.086590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.086833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.086900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.087210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.087289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.087610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.087677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.087932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.087999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 
00:23:54.286 [2024-07-25 13:52:51.088314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.088380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.088647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.088713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.089000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.089078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.089304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.089373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.089615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.089682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.089930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.089997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.090310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.286 [2024-07-25 13:52:51.090376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.286 qpair failed and we were unable to recover it. 00:23:54.286 [2024-07-25 13:52:51.090633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.090698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.090980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.091045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.091323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.091388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 
00:23:54.287 [2024-07-25 13:52:51.091634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.091700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.091985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.092052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.092330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.092407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.092670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.092737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.092976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.093044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.093308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.093390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.093583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.093612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.093743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.093773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.093887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.093917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.094017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.094047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 
00:23:54.287 [2024-07-25 13:52:51.094156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.094186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.094341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.094371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.094494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.094558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.094771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.094835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.095005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.095035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.095184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.095210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.095326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.095352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.095471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.095497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.095590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.095616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.095734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.095760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 
00:23:54.287 [2024-07-25 13:52:51.095878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.095904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.095988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.096014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.096130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.096156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.096270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.096296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.096385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.096411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.287 [2024-07-25 13:52:51.096528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.287 [2024-07-25 13:52:51.096554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.287 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.096660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.096686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.096779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.096809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.096893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.096918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.097006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.097032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 
00:23:54.288 [2024-07-25 13:52:51.097140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.097166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.097280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.097305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.097393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.097419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.097533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.097559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.097676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.097702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.097785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.097811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.097927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.097953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.098040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.098073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.098165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.098192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.098302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.098335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 
00:23:54.288 [2024-07-25 13:52:51.098477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.098503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.098622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.098648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.098762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.098788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.098900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.098926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.099016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.099042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.099159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.099185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.099312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.099355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.099484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.099513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.099670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.099728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.099959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.100018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 
00:23:54.288 [2024-07-25 13:52:51.100201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.100228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.100309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.100335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.100518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.100547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.100772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.100833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.101077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.101128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.101224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.101250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.101331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.101358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.101475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.101501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.101639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.101700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 00:23:54.288 [2024-07-25 13:52:51.101905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.288 [2024-07-25 13:52:51.101967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.288 qpair failed and we were unable to recover it. 
00:23:54.288 [2024-07-25 13:52:51.102207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.102234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.102353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.102379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.102489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.102515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.102737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.102796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.102949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.103008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.103188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.103215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.103302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.103327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.103468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.103501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.103645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.103674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.103923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.103982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 
00:23:54.289 [2024-07-25 13:52:51.104166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.104192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.104307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.104335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.104503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.104563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.104788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.104848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.105039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.105119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.105265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.105291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.105424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.105452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.105553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.105581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.105740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.105799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.105981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.106042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 
00:23:54.289 [2024-07-25 13:52:51.106222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.106248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.106371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.106397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.106483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.106509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.106648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.106676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.106823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.106852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.107147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.107173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.107260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.289 [2024-07-25 13:52:51.107286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.289 qpair failed and we were unable to recover it. 00:23:54.289 [2024-07-25 13:52:51.107377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.290 [2024-07-25 13:52:51.107404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.290 qpair failed and we were unable to recover it. 00:23:54.290 [2024-07-25 13:52:51.107498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.290 [2024-07-25 13:52:51.107577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.290 qpair failed and we were unable to recover it. 00:23:54.290 [2024-07-25 13:52:51.107798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.290 [2024-07-25 13:52:51.107858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.290 qpair failed and we were unable to recover it. 
00:23:54.290 [2024-07-25 13:52:51.108090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.290 [2024-07-25 13:52:51.108146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.290 qpair failed and we were unable to recover it. 00:23:54.290 [2024-07-25 13:52:51.108242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.290 [2024-07-25 13:52:51.108268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.290 qpair failed and we were unable to recover it. 00:23:54.290 [2024-07-25 13:52:51.108387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.290 [2024-07-25 13:52:51.108412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.290 qpair failed and we were unable to recover it. 00:23:54.290 [2024-07-25 13:52:51.108491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.290 [2024-07-25 13:52:51.108550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.290 qpair failed and we were unable to recover it. 00:23:54.290 [2024-07-25 13:52:51.108785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.290 [2024-07-25 13:52:51.108812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.290 qpair failed and we were unable to recover it. 00:23:54.290 [2024-07-25 13:52:51.109082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.290 [2024-07-25 13:52:51.109135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.290 qpair failed and we were unable to recover it. 00:23:54.290 [2024-07-25 13:52:51.109254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.290 [2024-07-25 13:52:51.109280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.290 qpair failed and we were unable to recover it. 00:23:54.290 [2024-07-25 13:52:51.109414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.290 [2024-07-25 13:52:51.109442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.290 qpair failed and we were unable to recover it. 00:23:54.290 [2024-07-25 13:52:51.109567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.290 [2024-07-25 13:52:51.109596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.290 qpair failed and we were unable to recover it. 00:23:54.290 [2024-07-25 13:52:51.109721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.290 [2024-07-25 13:52:51.109750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.290 qpair failed and we were unable to recover it. 
00:23:54.290 [2024-07-25 13:52:51.109935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.290 [2024-07-25 13:52:51.109991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.290 qpair failed and we were unable to recover it.
[the posix_sock_create / nvme_tcp_qpair_connect_sock error pair above repeats for every reconnect attempt from 13:52:51.109935 through 13:52:51.147514, always against addr=10.0.0.2, port=4420 with errno = 111, and each attempt ends with "qpair failed and we were unable to recover it."; the failing qpair is 0x7f3c88000b90 throughout, apart from a brief run against tqpair=0x7f3c98000b90 between 13:52:51.126796 and 13:52:51.128029]
00:23:54.299 [2024-07-25 13:52:51.147466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.147514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it.
00:23:54.299 [2024-07-25 13:52:51.147692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.147737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.147954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.148000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.148175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.148223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.148441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.148486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.148653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.148700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.148879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.148925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.149101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.149147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.149350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.149376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.149497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.149523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.149633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.149677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 
00:23:54.299 [2024-07-25 13:52:51.149876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.149902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.150025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.150050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.150205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.150251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.150470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.150515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.150658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.150704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.150858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.150903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.151083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.151129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.151302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.151348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.151533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.151559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.151644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.151670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 
00:23:54.299 [2024-07-25 13:52:51.151774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.151800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.151877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.151903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.151987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.152013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.152114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.152166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.152355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.152401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.152587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.152632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.152789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.152835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.152987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.153033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.153182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.299 [2024-07-25 13:52:51.153228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.299 qpair failed and we were unable to recover it. 00:23:54.299 [2024-07-25 13:52:51.153398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.153443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 
00:23:54.300 [2024-07-25 13:52:51.153590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.153636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.153863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.153908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.154049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.154110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.154290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.154336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.154497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.154542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.154726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.154752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.154843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.154870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.155007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.155037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.155188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.155242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.155469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.155524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 
00:23:54.300 [2024-07-25 13:52:51.155718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.155765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.155953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.155999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.156197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.156243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.156393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.156437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.156635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.156660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.156778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.156804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.156922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.156967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.157143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.157189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.157347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.157392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.157543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.157588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 
00:23:54.300 [2024-07-25 13:52:51.157744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.157791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.157983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.158030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.158195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.158245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.158450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.158497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.158674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.158718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.158897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.158942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.300 [2024-07-25 13:52:51.159081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.300 [2024-07-25 13:52:51.159127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.300 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.159279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.159323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.159467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.159514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.159744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.159800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 
00:23:54.301 [2024-07-25 13:52:51.159981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.160026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.160211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.160257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.160410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.160454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.160603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.160647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.160778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.160831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.160997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.161041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.161204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.161251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.161438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.161483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.161681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.161725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.161937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.161981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 
00:23:54.301 [2024-07-25 13:52:51.162162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.162208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.162427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.162471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.162625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.162681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.162870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.162914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.163106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.163153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.163333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.163358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.163538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.163582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.163748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.163773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.163891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.163917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.164111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.164157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 
00:23:54.301 [2024-07-25 13:52:51.164309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.164355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.164503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.164548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.164771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.164816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.164997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.165041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.165269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.165313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.165493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.165539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.165699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.165745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.165926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.165971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.166147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.166193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.301 [2024-07-25 13:52:51.166391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.166454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 
00:23:54.301 [2024-07-25 13:52:51.166681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.301 [2024-07-25 13:52:51.166706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.301 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.166908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.166933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.167141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.167191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.167376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.167421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.167600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.167646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.167798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.167842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.168020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.168073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.168254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.168300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.168444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.168488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.168628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.168682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 
00:23:54.302 [2024-07-25 13:52:51.168922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.168967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.169130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.169177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.169355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.169400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.169635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.169680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.169914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.169965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.170178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.170224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.170417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.170461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.170635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.170680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.170896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.170940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.171117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.171178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 
00:23:54.302 [2024-07-25 13:52:51.171312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.171357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.171543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.171587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.171760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.171805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.171990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.172037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.172212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.172257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.172436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.172481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.172665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.172709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.172858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.172904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.173078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.173150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.173366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.173391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 
00:23:54.302 [2024-07-25 13:52:51.173509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.173534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.173671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.173720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.173916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.173966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.174198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.174246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.174441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.302 [2024-07-25 13:52:51.174488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.302 qpair failed and we were unable to recover it. 00:23:54.302 [2024-07-25 13:52:51.174690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.303 [2024-07-25 13:52:51.174740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.303 qpair failed and we were unable to recover it. 00:23:54.303 [2024-07-25 13:52:51.174945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.303 [2024-07-25 13:52:51.174993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.303 qpair failed and we were unable to recover it. 00:23:54.303 [2024-07-25 13:52:51.175204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.303 [2024-07-25 13:52:51.175253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.303 qpair failed and we were unable to recover it. 00:23:54.303 [2024-07-25 13:52:51.175411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.303 [2024-07-25 13:52:51.175485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.303 qpair failed and we were unable to recover it. 00:23:54.303 [2024-07-25 13:52:51.175741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.303 [2024-07-25 13:52:51.175788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.303 qpair failed and we were unable to recover it. 
00:23:54.303 [2024-07-25 13:52:51.175956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.303 [2024-07-25 13:52:51.176016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.303 qpair failed and we were unable to recover it.
00:23:54.303-00:23:54.308 [... the identical three-record sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats back-to-back from 13:52:51.176191 through 13:52:51.223803; duplicate records elided ...]
00:23:54.309 [2024-07-25 13:52:51.223957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.309 [2024-07-25 13:52:51.224008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.309 qpair failed and we were unable to recover it. 00:23:54.309 [2024-07-25 13:52:51.224247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1199230 is same with the state(5) to be set 00:23:54.309 [2024-07-25 13:52:51.224543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.309 [2024-07-25 13:52:51.224620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.309 qpair failed and we were unable to recover it. 00:23:54.309 [2024-07-25 13:52:51.224829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.309 [2024-07-25 13:52:51.224883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.309 qpair failed and we were unable to recover it. 00:23:54.309 [2024-07-25 13:52:51.225104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.309 [2024-07-25 13:52:51.225157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.309 qpair failed and we were unable to recover it. 00:23:54.309 [2024-07-25 13:52:51.225361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.309 [2024-07-25 13:52:51.225411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.309 qpair failed and we were unable to recover it. 00:23:54.309 [2024-07-25 13:52:51.225616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.309 [2024-07-25 13:52:51.225664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.309 qpair failed and we were unable to recover it. 00:23:54.309 [2024-07-25 13:52:51.225826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.309 [2024-07-25 13:52:51.225876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.309 qpair failed and we were unable to recover it. 00:23:54.309 [2024-07-25 13:52:51.226092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.309 [2024-07-25 13:52:51.226146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.309 qpair failed and we were unable to recover it. 00:23:54.309 [2024-07-25 13:52:51.226352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.309 [2024-07-25 13:52:51.226402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.309 qpair failed and we were unable to recover it. 
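Note: the nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state entry above is a different failure from the surrounding connect() errors: it fires when the qpair's receive state machine is asked to transition into the state it already holds (state(5)). A minimal C sketch of that kind of redundant-transition guard; the enum value, struct layout, and function name here are assumptions for illustration, not the actual SPDK nvme_tcp.c definitions:

    #include <stdio.h>

    /* Illustrative only: "5" is taken from "state(5)" in the log; the
     * symbolic name is a guess, not SPDK's real enum. */
    enum nvme_tcp_pdu_recv_state { RECV_STATE_FIVE = 5 };

    struct nvme_tcp_qpair { enum nvme_tcp_pdu_recv_state recv_state; };

    static void set_recv_state(struct nvme_tcp_qpair *tqpair,
                               enum nvme_tcp_pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            /* Redundant transition: log it and bail out, producing the
             * message shape seen in the output above. */
            fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;
    }

    int main(void)
    {
        struct nvme_tcp_qpair tqpair = { RECV_STATE_FIVE };
        set_recv_state(&tqpair, RECV_STATE_FIVE); /* triggers the message */
        return 0;
    }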
00:23:54.309 [2024-07-25 13:52:51.226601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.309 [2024-07-25 13:52:51.226652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.309 qpair failed and we were unable to recover it.
00:23:54.309 [2024-07-25 13:52:51.226864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.309 [2024-07-25 13:52:51.226913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.309 qpair failed and we were unable to recover it.
00:23:54.309 [2024-07-25 13:52:51.227153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.309 [2024-07-25 13:52:51.227204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.309 qpair failed and we were unable to recover it.
00:23:54.309 [2024-07-25 13:52:51.227371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.309 [2024-07-25 13:52:51.227421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.309 qpair failed and we were unable to recover it.
00:23:54.309 [2024-07-25 13:52:51.227618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.309 [2024-07-25 13:52:51.227667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.309 qpair failed and we were unable to recover it.
00:23:54.309 [2024-07-25 13:52:51.227888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.309 [2024-07-25 13:52:51.227938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.309 qpair failed and we were unable to recover it.
00:23:54.309 [2024-07-25 13:52:51.228141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.309 [2024-07-25 13:52:51.228192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.309 qpair failed and we were unable to recover it.
00:23:54.309 [2024-07-25 13:52:51.228346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.309 [2024-07-25 13:52:51.228396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.309 qpair failed and we were unable to recover it.
00:23:54.309 [2024-07-25 13:52:51.228633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.309 [2024-07-25 13:52:51.228683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.309 qpair failed and we were unable to recover it.
00:23:54.309 [2024-07-25 13:52:51.228901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.309 [2024-07-25 13:52:51.228926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.309 qpair failed and we were unable to recover it.
00:23:54.309 [2024-07-25 13:52:51.229047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.309 [2024-07-25 13:52:51.229078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.309 qpair failed and we were unable to recover it.
00:23:54.309 [2024-07-25 13:52:51.229224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.309 [2024-07-25 13:52:51.229249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.309 qpair failed and we were unable to recover it.
00:23:54.309 [2024-07-25 13:52:51.229464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.309 [2024-07-25 13:52:51.229514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.309 qpair failed and we were unable to recover it.
00:23:54.309 [2024-07-25 13:52:51.229752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.309 [2024-07-25 13:52:51.229802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.309 qpair failed and we were unable to recover it.
00:23:54.309 [2024-07-25 13:52:51.230040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.309 [2024-07-25 13:52:51.230101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.309 qpair failed and we were unable to recover it.
00:23:54.309 [2024-07-25 13:52:51.230302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.309 [2024-07-25 13:52:51.230352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.309 qpair failed and we were unable to recover it.
00:23:54.309 [2024-07-25 13:52:51.230531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.309 [2024-07-25 13:52:51.230581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.309 qpair failed and we were unable to recover it.
00:23:54.309 [2024-07-25 13:52:51.230740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.309 [2024-07-25 13:52:51.230790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.309 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.230961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.231011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.231237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.231315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.231496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.231552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.231781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.231832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.232051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.232140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.232330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.232381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.232558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.232609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.232811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.232862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.233087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.233138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.233337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.233400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.233611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.233662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.233890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.233942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.234186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.234238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.234447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.234497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.234724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.234775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.235030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.235101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.235274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.235325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.235520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.235570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.235811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.235861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.236085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.236140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.236349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.236404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.236634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.236689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.236900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.236960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.237163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.237220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.237456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.237520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.237716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.237771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.237989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.238045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.238284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.238344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.310 [2024-07-25 13:52:51.238552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.310 [2024-07-25 13:52:51.238604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.310 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.238774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.238839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.239089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.239163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.239390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.239445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.239630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.239684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.239865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.239920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.240143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.240200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.240411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.240475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.240700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.240750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.240956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.241008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.241276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.241328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.241588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.241656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.241926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.241981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.242208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.242265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.242453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.242508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.242763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.242813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.243022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.243089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.243317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.243369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.243611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.243662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.243918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.243944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.244078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.244104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.244184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.244210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.244391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.244447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.244700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.244765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.244978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.245032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.245223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.245288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.245487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.245512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.245698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.245753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.245929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.245983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.246183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.246255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.246520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.246576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.246832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.246887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.247096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.247152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.247334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.247387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.311 [2024-07-25 13:52:51.247606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.311 [2024-07-25 13:52:51.247660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.311 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.247835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.247909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.248153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.248209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.248411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.248466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.248689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.248743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.249020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.249101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.249328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.249383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.249643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.249669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.249785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.249810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.249964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.250019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.250272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.250328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.250548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.250604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.250787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.250841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.251118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.251175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.251349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.251405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.251617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.251672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.251896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.251950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.252150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.252205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.252464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.252530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.252747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.252803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.253079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.253135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.253335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.253389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.253641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.253696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.253949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.254015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.254224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.254280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.254509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.254535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.254627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.254654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.254765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.254831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.255014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.255080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.255339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.255393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.255600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.255664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.255850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.255905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.256144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.256200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.256411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.256466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.256716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.256770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.257026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.257108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.257378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.257432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.312 [2024-07-25 13:52:51.257652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.312 [2024-07-25 13:52:51.257708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.312 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.257930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.257984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.258219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.258273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.258439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.258493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.258717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.258743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.258827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.258853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.258937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.258962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.259053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.259086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.259172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.259202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.259342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.259367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.259544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.259599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.259767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.259822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.260005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.260089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.260298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.260353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.260603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.260659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.260823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.260881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.261111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.261165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.261374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.261428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.261630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.261660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.261755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.261780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.261910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.261965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.262191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.262247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.262465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.262519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.262685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.262738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.262951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.263022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.263246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.263302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.263556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.263620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.263808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.263862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.264118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.264174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.264339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.264394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.264617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.264672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.264910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.264965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.265209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.265236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.265323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.265348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.265460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.265485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.265572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.265598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.265693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.313 [2024-07-25 13:52:51.265718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.313 qpair failed and we were unable to recover it.
00:23:54.313 [2024-07-25 13:52:51.265801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.314 [2024-07-25 13:52:51.265826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.314 qpair failed and we were unable to recover it.
00:23:54.314 [2024-07-25 13:52:51.265942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.314 [2024-07-25 13:52:51.265975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.314 qpair failed and we were unable to recover it.
00:23:54.314 [2024-07-25 13:52:51.266135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.314 [2024-07-25 13:52:51.266192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.314 qpair failed and we were unable to recover it.
00:23:54.314 [2024-07-25 13:52:51.266401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.314 [2024-07-25 13:52:51.266455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.314 qpair failed and we were unable to recover it.
00:23:54.314 [2024-07-25 13:52:51.266660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.314 [2024-07-25 13:52:51.266715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.314 qpair failed and we were unable to recover it.
00:23:54.314 [2024-07-25 13:52:51.266932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.314 [2024-07-25 13:52:51.266988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.314 qpair failed and we were unable to recover it.
00:23:54.314 [2024-07-25 13:52:51.267262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.314 [2024-07-25 13:52:51.267317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.314 qpair failed and we were unable to recover it.
00:23:54.314 [2024-07-25 13:52:51.267507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.314 [2024-07-25 13:52:51.267561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.314 qpair failed and we were unable to recover it.
00:23:54.314 [2024-07-25 13:52:51.267740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.314 [2024-07-25 13:52:51.267814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.314 qpair failed and we were unable to recover it.
00:23:54.314 [2024-07-25 13:52:51.268010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.314 [2024-07-25 13:52:51.268094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.314 qpair failed and we were unable to recover it.
00:23:54.314 [2024-07-25 13:52:51.268372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.314 [2024-07-25 13:52:51.268428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.314 qpair failed and we were unable to recover it.
00:23:54.314 [2024-07-25 13:52:51.268667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.314 [2024-07-25 13:52:51.268730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.314 qpair failed and we were unable to recover it.
00:23:54.314 [2024-07-25 13:52:51.268914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.314 [2024-07-25 13:52:51.268981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.314 qpair failed and we were unable to recover it.
00:23:54.314 [2024-07-25 13:52:51.269233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.314 [2024-07-25 13:52:51.269289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.314 qpair failed and we were unable to recover it.
00:23:54.314 [2024-07-25 13:52:51.269457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.314 [2024-07-25 13:52:51.269513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.314 qpair failed and we were unable to recover it.
00:23:54.314 [2024-07-25 13:52:51.269787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.314 [2024-07-25 13:52:51.269842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.314 qpair failed and we were unable to recover it.
00:23:54.314 [2024-07-25 13:52:51.270000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.314 [2024-07-25 13:52:51.270055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.314 qpair failed and we were unable to recover it. 00:23:54.314 [2024-07-25 13:52:51.270276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.314 [2024-07-25 13:52:51.270330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.314 qpair failed and we were unable to recover it. 00:23:54.314 [2024-07-25 13:52:51.270596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.314 [2024-07-25 13:52:51.270656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.314 qpair failed and we were unable to recover it. 00:23:54.314 [2024-07-25 13:52:51.270915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.314 [2024-07-25 13:52:51.270985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.314 qpair failed and we were unable to recover it. 00:23:54.314 [2024-07-25 13:52:51.271239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.314 [2024-07-25 13:52:51.271265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.314 qpair failed and we were unable to recover it. 00:23:54.314 [2024-07-25 13:52:51.271392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.314 [2024-07-25 13:52:51.271417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.314 qpair failed and we were unable to recover it. 00:23:54.314 [2024-07-25 13:52:51.271593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.314 [2024-07-25 13:52:51.271651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.314 qpair failed and we were unable to recover it. 00:23:54.314 [2024-07-25 13:52:51.271898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.314 [2024-07-25 13:52:51.271957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.314 qpair failed and we were unable to recover it. 00:23:54.314 [2024-07-25 13:52:51.272242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.314 [2024-07-25 13:52:51.272298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.314 qpair failed and we were unable to recover it. 00:23:54.314 [2024-07-25 13:52:51.272494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.314 [2024-07-25 13:52:51.272552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.314 qpair failed and we were unable to recover it. 
00:23:54.314 [2024-07-25 13:52:51.272753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.314 [2024-07-25 13:52:51.272808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.314 qpair failed and we were unable to recover it. 00:23:54.314 [2024-07-25 13:52:51.272989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.314 [2024-07-25 13:52:51.273043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.314 qpair failed and we were unable to recover it. 00:23:54.314 [2024-07-25 13:52:51.273312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.314 [2024-07-25 13:52:51.273366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.314 qpair failed and we were unable to recover it. 00:23:54.314 [2024-07-25 13:52:51.273539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.314 [2024-07-25 13:52:51.273596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.314 qpair failed and we were unable to recover it. 00:23:54.314 [2024-07-25 13:52:51.273869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.273923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.274175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.274231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.274395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.274446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.274671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.274725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.274901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.274964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.275237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.275302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 
00:23:54.315 [2024-07-25 13:52:51.275435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.275461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.275653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.275678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.275777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.275802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.275983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.276039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.276293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.276348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.276604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.276665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.276945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.277003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.277256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.277309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.277516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.277590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.277776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.277829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 
00:23:54.315 [2024-07-25 13:52:51.278052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.278125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.278403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.278462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.278741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.278817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.279108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.279163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.279390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.279444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.279666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.279731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.279986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.280040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.280273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.280329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.280514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.280569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.280797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.280852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 
00:23:54.315 [2024-07-25 13:52:51.281052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.281131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.281358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.281413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.281580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.281637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.281848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.281904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.282120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.282175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.282407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.315 [2024-07-25 13:52:51.282461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.315 qpair failed and we were unable to recover it. 00:23:54.315 [2024-07-25 13:52:51.282713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.316 [2024-07-25 13:52:51.282769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.316 qpair failed and we were unable to recover it. 00:23:54.316 [2024-07-25 13:52:51.282945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.316 [2024-07-25 13:52:51.282997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.316 qpair failed and we were unable to recover it. 00:23:54.316 [2024-07-25 13:52:51.283217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.316 [2024-07-25 13:52:51.283274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.316 qpair failed and we were unable to recover it. 00:23:54.316 [2024-07-25 13:52:51.283503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.316 [2024-07-25 13:52:51.283557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.316 qpair failed and we were unable to recover it. 
00:23:54.316 [2024-07-25 13:52:51.283747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.316 [2024-07-25 13:52:51.283801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.316 qpair failed and we were unable to recover it. 00:23:54.316 [2024-07-25 13:52:51.284051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.316 [2024-07-25 13:52:51.284175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.316 qpair failed and we were unable to recover it. 00:23:54.316 [2024-07-25 13:52:51.284354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.316 [2024-07-25 13:52:51.284410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.316 qpair failed and we were unable to recover it. 00:23:54.316 [2024-07-25 13:52:51.284632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.316 [2024-07-25 13:52:51.284687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.316 qpair failed and we were unable to recover it. 00:23:54.316 [2024-07-25 13:52:51.284908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.316 [2024-07-25 13:52:51.284963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.316 qpair failed and we were unable to recover it. 00:23:54.589 [2024-07-25 13:52:51.285181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.589 [2024-07-25 13:52:51.285238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.589 qpair failed and we were unable to recover it. 00:23:54.589 [2024-07-25 13:52:51.285458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.589 [2024-07-25 13:52:51.285513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.589 qpair failed and we were unable to recover it. 00:23:54.589 [2024-07-25 13:52:51.285734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.589 [2024-07-25 13:52:51.285790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.589 qpair failed and we were unable to recover it. 00:23:54.589 [2024-07-25 13:52:51.285973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.589 [2024-07-25 13:52:51.286027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.589 qpair failed and we were unable to recover it. 00:23:54.589 [2024-07-25 13:52:51.286295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.589 [2024-07-25 13:52:51.286352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.589 qpair failed and we were unable to recover it. 
00:23:54.589 [2024-07-25 13:52:51.286609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.589 [2024-07-25 13:52:51.286663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.589 qpair failed and we were unable to recover it. 00:23:54.589 [2024-07-25 13:52:51.286874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.589 [2024-07-25 13:52:51.286928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.589 qpair failed and we were unable to recover it. 00:23:54.589 [2024-07-25 13:52:51.287202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.589 [2024-07-25 13:52:51.287260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.589 qpair failed and we were unable to recover it. 00:23:54.589 [2024-07-25 13:52:51.287476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.589 [2024-07-25 13:52:51.287530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.589 qpair failed and we were unable to recover it. 00:23:54.589 [2024-07-25 13:52:51.287749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.589 [2024-07-25 13:52:51.287804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.589 qpair failed and we were unable to recover it. 00:23:54.589 [2024-07-25 13:52:51.288034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.589 [2024-07-25 13:52:51.288103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.589 qpair failed and we were unable to recover it. 00:23:54.589 [2024-07-25 13:52:51.288281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.589 [2024-07-25 13:52:51.288334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.589 qpair failed and we were unable to recover it. 00:23:54.589 [2024-07-25 13:52:51.288553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.589 [2024-07-25 13:52:51.288578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.589 qpair failed and we were unable to recover it. 00:23:54.589 [2024-07-25 13:52:51.288662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.589 [2024-07-25 13:52:51.288688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.589 qpair failed and we were unable to recover it. 00:23:54.589 [2024-07-25 13:52:51.288774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.589 [2024-07-25 13:52:51.288799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.589 qpair failed and we were unable to recover it. 
00:23:54.589 [2024-07-25 13:52:51.288886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.589 [2024-07-25 13:52:51.288911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.289121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.289180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.289474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.289533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.289764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.289822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.290073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.290128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.290332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.290400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.290631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.290686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.290901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.290961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.291187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.291243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.291465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.291490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 
00:23:54.590 [2024-07-25 13:52:51.291589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.291619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.291716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.291741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.291823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.291877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.292118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.292175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.292404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.292458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.292697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.292751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.292955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.293010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.293266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.293293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.293385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.293410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.293522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.293549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 
00:23:54.590 [2024-07-25 13:52:51.293674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.293700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.293862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.293916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.294173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.294199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.294285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.294310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.294444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.294469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.294548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.294601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.294863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.294918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.295104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.295162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.295396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.295451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.295702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.295756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 
00:23:54.590 [2024-07-25 13:52:51.296003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.296091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.296396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.296451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.296726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.296782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.297003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.297057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.590 [2024-07-25 13:52:51.297288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.590 [2024-07-25 13:52:51.297341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.590 qpair failed and we were unable to recover it. 00:23:54.591 [2024-07-25 13:52:51.297552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.297620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 00:23:54.591 [2024-07-25 13:52:51.297805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.297861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 00:23:54.591 [2024-07-25 13:52:51.298078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.298104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 00:23:54.591 [2024-07-25 13:52:51.298339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.298394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 00:23:54.591 [2024-07-25 13:52:51.298564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.298618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 
00:23:54.591 [2024-07-25 13:52:51.298823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.298879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 00:23:54.591 [2024-07-25 13:52:51.299104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.299168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 00:23:54.591 [2024-07-25 13:52:51.299440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.299497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 00:23:54.591 [2024-07-25 13:52:51.299676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.299731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 00:23:54.591 [2024-07-25 13:52:51.299982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.300036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 00:23:54.591 [2024-07-25 13:52:51.300252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.300315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 00:23:54.591 [2024-07-25 13:52:51.300559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.300626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 00:23:54.591 [2024-07-25 13:52:51.300929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.300989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 00:23:54.591 [2024-07-25 13:52:51.301239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.301299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 00:23:54.591 [2024-07-25 13:52:51.301485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.301543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 
00:23:54.591 [2024-07-25 13:52:51.301785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.301840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 00:23:54.591 [2024-07-25 13:52:51.302077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.302137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 00:23:54.591 [2024-07-25 13:52:51.302367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.302422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 00:23:54.591 [2024-07-25 13:52:51.302661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.302722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 00:23:54.591 [2024-07-25 13:52:51.302988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.303046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 00:23:54.591 [2024-07-25 13:52:51.303289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.303347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 00:23:54.591 [2024-07-25 13:52:51.303615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.303690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 00:23:54.591 [2024-07-25 13:52:51.304001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.304076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 00:23:54.591 [2024-07-25 13:52:51.304323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.304379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 00:23:54.591 [2024-07-25 13:52:51.304638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.591 [2024-07-25 13:52:51.304692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.591 qpair failed and we were unable to recover it. 
00:23:54.591 [2024-07-25 13:52:51.304937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.591 [2024-07-25 13:52:51.304995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.591 qpair failed and we were unable to recover it.
[... the identical three-line failure (connect() errno = 111 from posix.c:1023, sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 from nvme_tcp.c:2383, "qpair failed and we were unable to recover it.") repeats for every subsequent reconnect attempt, timestamps 13:52:51.305215 through 13:52:51.370183 ...]
00:23:54.598 [2024-07-25 13:52:51.370462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.598 [2024-07-25 13:52:51.370541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.598 qpair failed and we were unable to recover it.
00:23:54.598 [2024-07-25 13:52:51.370847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.370924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.371239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.371319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.371535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.371629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.371862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.371923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.372236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.372314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.372578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.372654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.372904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.372966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.373252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.373357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.373667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.373743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.374017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.374120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 
00:23:54.598 [2024-07-25 13:52:51.374441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.374518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.374785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.374845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.375159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.375238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.375508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.375588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.375873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.375932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.376193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.376271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.376543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.376622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.376867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.376927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.377170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.377249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.377520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.377609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 
00:23:54.598 [2024-07-25 13:52:51.377850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.377908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.378096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.378157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.378372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.378454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.378710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.378787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.379080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.379142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.379384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.379477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.379731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.379806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.380050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.380127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.380389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.380448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.380709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.380784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 
00:23:54.598 [2024-07-25 13:52:51.381023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.381112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.381418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.598 [2024-07-25 13:52:51.381496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.598 qpair failed and we were unable to recover it. 00:23:54.598 [2024-07-25 13:52:51.381756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.381830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.382095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.382154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.382389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.382465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.382684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.382758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.383054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.383149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.383397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.383475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.383721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.383797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.384091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.384152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 
00:23:54.599 [2024-07-25 13:52:51.384463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.384539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.384842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.384917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.385143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.385213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.385491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.385567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.385859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.385934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.386204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.386262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.386536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.386597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.386843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.386919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.387138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.387218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.387515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.387591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 
00:23:54.599 [2024-07-25 13:52:51.387883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.387959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.388154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.388215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.388478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.388553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.388820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.388895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.389188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.389266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.389570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.389646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.389888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.389947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.390146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.390205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.390500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.390561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.390833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.390891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 
00:23:54.599 [2024-07-25 13:52:51.391150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.599 [2024-07-25 13:52:51.391229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.599 qpair failed and we were unable to recover it. 00:23:54.599 [2024-07-25 13:52:51.391524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.391600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.391833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.391892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.392185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.392262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.392473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.392548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.392771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.392828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.393121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.393200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.393466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.393541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.393698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.393755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.393959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.394018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 
00:23:54.600 [2024-07-25 13:52:51.394349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.394426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.394685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.394762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.394999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.395057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.395396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.395476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.395768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.395844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.396084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.396144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.396401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.396476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.396712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.396788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.397053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.397126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.397378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.397456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 
00:23:54.600 [2024-07-25 13:52:51.397704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.397781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.398004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.398090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.398345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.398431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.398735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.398812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.399117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.399202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.399511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.399585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.399895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.399970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.400252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.400310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.400587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.400645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.400947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.401022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 
00:23:54.600 [2024-07-25 13:52:51.401309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.401384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.401681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.401757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.402001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.402074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.402376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.402451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.402758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.402833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.600 qpair failed and we were unable to recover it. 00:23:54.600 [2024-07-25 13:52:51.403054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.600 [2024-07-25 13:52:51.403126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.403345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.403423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.403729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.403805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.404040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.404115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.404376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.404452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 
00:23:54.601 [2024-07-25 13:52:51.404714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.404790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.404971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.405028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.405315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.405376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.405642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.405719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.405917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.405976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.406293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.406371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.406631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.406706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.406894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.406957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.407197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.407274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.407536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.407613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 
00:23:54.601 [2024-07-25 13:52:51.407880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.407938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.408170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.408231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.408480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.408557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.408851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.408927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.409187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.409265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.409512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.409589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.409819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.409876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.410133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.410212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.410460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.410535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.410808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.410866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 
00:23:54.601 [2024-07-25 13:52:51.411099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.411158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.411408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.411483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.411678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.411745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.412017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.412086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.412372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.412430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.412628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.412704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.412985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.413044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.413243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.413300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.413519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.413594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.601 qpair failed and we were unable to recover it. 00:23:54.601 [2024-07-25 13:52:51.413827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.601 [2024-07-25 13:52:51.413885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.602 qpair failed and we were unable to recover it. 
00:23:54.602 [2024-07-25 13:52:51.414095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.602 [2024-07-25 13:52:51.414154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.602 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / qpair failed triplet repeats 52 more times for tqpair=0x7f3c98000b90, timestamps 13:52:51.414411 through 13:52:51.430689 ...]
00:23:54.603 [2024-07-25 13:52:51.430957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.603 [2024-07-25 13:52:51.431015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.603 qpair failed and we were unable to recover it.
00:23:54.603 [2024-07-25 13:52:51.431377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.603 [2024-07-25 13:52:51.431476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.603 qpair failed and we were unable to recover it.
[... the same triplet repeats 25 more times for tqpair=0x118b250, timestamps 13:52:51.431754 through 13:52:51.439733 ...]
00:23:54.604 [2024-07-25 13:52:51.439942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.604 [2024-07-25 13:52:51.440004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.604 qpair failed and we were unable to recover it.
00:23:54.604 [2024-07-25 13:52:51.440315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.604 [2024-07-25 13:52:51.440404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.604 qpair failed and we were unable to recover it.
[... the same triplet repeats 127 more times for tqpair=0x7f3c98000b90, timestamps 13:52:51.440727 through 13:52:51.481958 ...]
00:23:54.607 [2024-07-25 13:52:51.482269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.607 [2024-07-25 13:52:51.482347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.607 qpair failed and we were unable to recover it.
00:23:54.607 [2024-07-25 13:52:51.482640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.482716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.482943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.483002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.483335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.483414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.483665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.483740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.484000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.484087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.484282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.484343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.484603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.484662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.484905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.484964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.485284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.485362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.485632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.485690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 
00:23:54.607 [2024-07-25 13:52:51.485893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.485953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.486206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.486284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.486539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.486614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.486842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.486903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.487150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.487228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.487476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.487552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.487796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.487853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.488084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.488146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.488406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.488481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.488679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.488739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 
00:23:54.607 [2024-07-25 13:52:51.488966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.489026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.489344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.489425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.489726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.489802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.490042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.490116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.490350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.490428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.490683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.490759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.490987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.491047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.491312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.491388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.491692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.491767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.492031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.492116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 
00:23:54.607 [2024-07-25 13:52:51.492432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.492516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.492734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.492809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.492999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.493057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.493279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.493355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.493648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.493723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.493953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.494012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.494284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.494380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.494650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.494724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.494954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.495013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.495305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.495383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 
00:23:54.607 [2024-07-25 13:52:51.495581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.495658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.495927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.607 [2024-07-25 13:52:51.495986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.607 qpair failed and we were unable to recover it. 00:23:54.607 [2024-07-25 13:52:51.496267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.496345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 00:23:54.608 [2024-07-25 13:52:51.496603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.496679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 00:23:54.608 [2024-07-25 13:52:51.496946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.497004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 00:23:54.608 [2024-07-25 13:52:51.497319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.497405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 00:23:54.608 [2024-07-25 13:52:51.497666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.497743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 00:23:54.608 [2024-07-25 13:52:51.497944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.498004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 00:23:54.608 [2024-07-25 13:52:51.498321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.498399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 00:23:54.608 [2024-07-25 13:52:51.498655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.498732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 
00:23:54.608 [2024-07-25 13:52:51.499012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.499083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 00:23:54.608 [2024-07-25 13:52:51.499389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.499465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 00:23:54.608 [2024-07-25 13:52:51.499760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.499837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 00:23:54.608 [2024-07-25 13:52:51.500024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.500134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 00:23:54.608 [2024-07-25 13:52:51.500446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.500521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 00:23:54.608 [2024-07-25 13:52:51.500774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.500851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 00:23:54.608 [2024-07-25 13:52:51.501104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.501165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 00:23:54.608 [2024-07-25 13:52:51.501429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.501488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 00:23:54.608 [2024-07-25 13:52:51.501703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.501783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 00:23:54.608 [2024-07-25 13:52:51.502009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.502081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 
00:23:54.608 [2024-07-25 13:52:51.502335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.502411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 00:23:54.608 [2024-07-25 13:52:51.502693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.502768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 00:23:54.608 [2024-07-25 13:52:51.502969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.503027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 00:23:54.608 [2024-07-25 13:52:51.503262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.503338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 00:23:54.608 [2024-07-25 13:52:51.503563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.503621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 00:23:54.608 [2024-07-25 13:52:51.503855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.608 [2024-07-25 13:52:51.503914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.608 qpair failed and we were unable to recover it. 00:23:54.608 [2024-07-25 13:52:51.504199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.504277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.504577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.504653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.504887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.504946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.505211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.505270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 
00:23:54.609 [2024-07-25 13:52:51.505532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.505607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.505804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.505864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.506118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.506199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.506458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.506532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.506724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.506783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.507024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.507096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.507394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.507479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.507776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.507852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.508046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.508134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.508443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.508520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 
00:23:54.609 [2024-07-25 13:52:51.508719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.508796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.509019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.509091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.509343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.509418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.509610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.509686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.509952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.510009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.510250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.510327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.510609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.510667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.510863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.510923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.511133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.511210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.511483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.511543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 
00:23:54.609 [2024-07-25 13:52:51.511874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.511949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.512158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.512238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.512505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.512580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.512737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.512796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.512972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.513029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.513283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.513360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.513594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.513670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.513868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.513926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.514166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.514243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 00:23:54.609 [2024-07-25 13:52:51.514512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.609 [2024-07-25 13:52:51.514589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.609 qpair failed and we were unable to recover it. 
00:23:54.610 [2024-07-25 13:52:51.514865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.514922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.515116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.515174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.515408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.515485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.515757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.515832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.516084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.516144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.516394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.516470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.516711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.516786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.516979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.517038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.517359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.517436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.517746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.517822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 
00:23:54.610 [2024-07-25 13:52:51.518082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.518142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.518321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.518381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.518676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.518752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.518979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.519039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.519300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.519377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.519686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.519761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.519996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.520080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.520277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.520353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.520585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.520662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.520848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.520910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 
00:23:54.610 [2024-07-25 13:52:51.521203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.521281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.521586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.521663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.521899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.521958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.522247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.522325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.522571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.522647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.522885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.522942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.523220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.523298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.610 [2024-07-25 13:52:51.523504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.610 [2024-07-25 13:52:51.523579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.610 qpair failed and we were unable to recover it. 00:23:54.611 [2024-07-25 13:52:51.523851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.611 [2024-07-25 13:52:51.523910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.611 qpair failed and we were unable to recover it. 00:23:54.611 [2024-07-25 13:52:51.524165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.611 [2024-07-25 13:52:51.524244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.611 qpair failed and we were unable to recover it. 
00:23:54.611 [2024-07-25 13:52:51.524513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.611 [2024-07-25 13:52:51.524597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.611 qpair failed and we were unable to recover it. 00:23:54.611 [2024-07-25 13:52:51.524845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.611 [2024-07-25 13:52:51.524903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.611 qpair failed and we were unable to recover it. 00:23:54.611 [2024-07-25 13:52:51.525145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.611 [2024-07-25 13:52:51.525221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.611 qpair failed and we were unable to recover it. 00:23:54.611 [2024-07-25 13:52:51.525409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.611 [2024-07-25 13:52:51.525485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.611 qpair failed and we were unable to recover it. 00:23:54.611 [2024-07-25 13:52:51.525731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.611 [2024-07-25 13:52:51.525805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.611 qpair failed and we were unable to recover it. 00:23:54.611 [2024-07-25 13:52:51.526038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.611 [2024-07-25 13:52:51.526112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.611 qpair failed and we were unable to recover it. 00:23:54.611 [2024-07-25 13:52:51.526396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.611 [2024-07-25 13:52:51.526455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.611 qpair failed and we were unable to recover it. 00:23:54.611 [2024-07-25 13:52:51.526706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.611 [2024-07-25 13:52:51.526782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.611 qpair failed and we were unable to recover it. 00:23:54.611 [2024-07-25 13:52:51.527009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.611 [2024-07-25 13:52:51.527098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.611 qpair failed and we were unable to recover it. 00:23:54.611 [2024-07-25 13:52:51.527291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.611 [2024-07-25 13:52:51.527350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.611 qpair failed and we were unable to recover it. 
00:23:54.617 [2024-07-25 13:52:51.589536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.617 [2024-07-25 13:52:51.589615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.617 qpair failed and we were unable to recover it. 00:23:54.617 [2024-07-25 13:52:51.589841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.617 [2024-07-25 13:52:51.589899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.617 qpair failed and we were unable to recover it. 00:23:54.617 [2024-07-25 13:52:51.590199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.617 [2024-07-25 13:52:51.590276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.617 qpair failed and we were unable to recover it. 00:23:54.617 [2024-07-25 13:52:51.590504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.617 [2024-07-25 13:52:51.590581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.617 qpair failed and we were unable to recover it. 00:23:54.617 [2024-07-25 13:52:51.590751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.617 [2024-07-25 13:52:51.590809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.617 qpair failed and we were unable to recover it. 00:23:54.617 [2024-07-25 13:52:51.591003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.617 [2024-07-25 13:52:51.591074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.617 qpair failed and we were unable to recover it. 00:23:54.617 [2024-07-25 13:52:51.591320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.617 [2024-07-25 13:52:51.591398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.617 qpair failed and we were unable to recover it. 00:23:54.617 [2024-07-25 13:52:51.591632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.617 [2024-07-25 13:52:51.591708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.617 qpair failed and we were unable to recover it. 00:23:54.617 [2024-07-25 13:52:51.591943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.617 [2024-07-25 13:52:51.592002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.617 qpair failed and we were unable to recover it. 00:23:54.617 [2024-07-25 13:52:51.592277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.617 [2024-07-25 13:52:51.592355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.617 qpair failed and we were unable to recover it. 
00:23:54.617 [2024-07-25 13:52:51.592582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.617 [2024-07-25 13:52:51.592656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.617 qpair failed and we were unable to recover it. 00:23:54.617 [2024-07-25 13:52:51.592877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.617 [2024-07-25 13:52:51.592935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.617 qpair failed and we were unable to recover it. 00:23:54.617 [2024-07-25 13:52:51.593203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.617 [2024-07-25 13:52:51.593280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.617 qpair failed and we were unable to recover it. 00:23:54.617 [2024-07-25 13:52:51.593526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.617 [2024-07-25 13:52:51.593611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.617 qpair failed and we were unable to recover it. 00:23:54.617 [2024-07-25 13:52:51.593881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.617 [2024-07-25 13:52:51.593939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.617 qpair failed and we were unable to recover it. 00:23:54.617 [2024-07-25 13:52:51.594231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.617 [2024-07-25 13:52:51.594308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.617 qpair failed and we were unable to recover it. 00:23:54.617 [2024-07-25 13:52:51.594546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.617 [2024-07-25 13:52:51.594622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.594801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.594862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.595089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.595148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.595422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.595480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 
00:23:54.618 [2024-07-25 13:52:51.595725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.595801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.596105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.596164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.596458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.596534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.596755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.596815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.597042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.597114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.597388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.597463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.597765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.597822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.598037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.598108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.598347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.598422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.598714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.598791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 
00:23:54.618 [2024-07-25 13:52:51.598978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.599037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.599360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.599439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.599684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.599759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.600029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.600110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.600374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.600450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.600706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.600782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.601016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.601085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.601393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.601469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.601736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.601811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.602042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.602113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 
00:23:54.618 [2024-07-25 13:52:51.602415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.602474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.602696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.602754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.603034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.603105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.603346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.603404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.603711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.603787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.604015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.604094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.604300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.604361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.604668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.604744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.604975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.618 [2024-07-25 13:52:51.605033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.618 qpair failed and we were unable to recover it. 00:23:54.618 [2024-07-25 13:52:51.605241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.619 [2024-07-25 13:52:51.605299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.619 qpair failed and we were unable to recover it. 
00:23:54.619 [2024-07-25 13:52:51.605536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.619 [2024-07-25 13:52:51.605612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.619 qpair failed and we were unable to recover it. 00:23:54.619 [2024-07-25 13:52:51.605811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.619 [2024-07-25 13:52:51.605890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.619 qpair failed and we were unable to recover it. 00:23:54.619 [2024-07-25 13:52:51.606154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.619 [2024-07-25 13:52:51.606231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.619 qpair failed and we were unable to recover it. 00:23:54.619 [2024-07-25 13:52:51.606563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.619 [2024-07-25 13:52:51.606630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.619 qpair failed and we were unable to recover it. 00:23:54.619 [2024-07-25 13:52:51.606906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.619 [2024-07-25 13:52:51.606964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.619 qpair failed and we were unable to recover it. 00:23:54.619 [2024-07-25 13:52:51.607246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.619 [2024-07-25 13:52:51.607323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.619 qpair failed and we were unable to recover it. 00:23:54.619 [2024-07-25 13:52:51.607523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.619 [2024-07-25 13:52:51.607602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.619 qpair failed and we were unable to recover it. 00:23:54.619 [2024-07-25 13:52:51.607870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.619 [2024-07-25 13:52:51.607928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.619 qpair failed and we were unable to recover it. 00:23:54.619 [2024-07-25 13:52:51.608176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.619 [2024-07-25 13:52:51.608253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.619 qpair failed and we were unable to recover it. 00:23:54.619 [2024-07-25 13:52:51.608508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.619 [2024-07-25 13:52:51.608583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.619 qpair failed and we were unable to recover it. 
00:23:54.619 [2024-07-25 13:52:51.608840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.619 [2024-07-25 13:52:51.608916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.619 qpair failed and we were unable to recover it. 00:23:54.619 [2024-07-25 13:52:51.609102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.619 [2024-07-25 13:52:51.609158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.619 qpair failed and we were unable to recover it. 00:23:54.619 [2024-07-25 13:52:51.609407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.619 [2024-07-25 13:52:51.609483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.619 qpair failed and we were unable to recover it. 00:23:54.898 [2024-07-25 13:52:51.609738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.898 [2024-07-25 13:52:51.609815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.898 qpair failed and we were unable to recover it. 00:23:54.898 [2024-07-25 13:52:51.610053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.898 [2024-07-25 13:52:51.610125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.898 qpair failed and we were unable to recover it. 00:23:54.898 [2024-07-25 13:52:51.610435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.898 [2024-07-25 13:52:51.610511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.898 qpair failed and we were unable to recover it. 00:23:54.898 [2024-07-25 13:52:51.610761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.898 [2024-07-25 13:52:51.610837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.898 qpair failed and we were unable to recover it. 00:23:54.898 [2024-07-25 13:52:51.611041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.898 [2024-07-25 13:52:51.611114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.898 qpair failed and we were unable to recover it. 00:23:54.898 [2024-07-25 13:52:51.611383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.898 [2024-07-25 13:52:51.611460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.898 qpair failed and we were unable to recover it. 00:23:54.898 [2024-07-25 13:52:51.611727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.611786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 
00:23:54.899 [2024-07-25 13:52:51.611974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.612032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.612249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.612326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.612589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.612665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.612862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.612919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.613173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.613250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.613507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.613567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.613792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.613851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.614083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.614143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.614370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.614429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.614691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.614750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 
00:23:54.899 [2024-07-25 13:52:51.615027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.615102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.615386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.615463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.615708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.615784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.616014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.616086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.616365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.616442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.616696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.616771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.617038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.617128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.617446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.617521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.617728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.617804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.618083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.618142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 
00:23:54.899 [2024-07-25 13:52:51.618417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.618475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.618723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.618798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.619003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.619074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.619344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.619428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.619687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.619763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.619960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.620018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.620331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.620408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.620711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.620787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.621025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.621101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.621363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.621440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 
00:23:54.899 [2024-07-25 13:52:51.621650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.621728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.621951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.622009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.624163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.624195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.899 qpair failed and we were unable to recover it. 00:23:54.899 [2024-07-25 13:52:51.624387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.899 [2024-07-25 13:52:51.624438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.624633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.624688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.624833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.624859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.625012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.625039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.625262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.625315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.625411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.625437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.625640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.625703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 
00:23:54.900 [2024-07-25 13:52:51.625832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.625858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.625950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.625976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.626074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.626101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.626253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.626312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.626496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.626546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.626738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.626793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.626929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.626956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.627102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.627128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.627279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.627305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.627466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.627518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 
00:23:54.900 [2024-07-25 13:52:51.627610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.627636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.627751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.627777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.627891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.627917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.628007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.628033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.628149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.628175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.628287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.628312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.628401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.628426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.628513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.628539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.628658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.628684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.628764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.628789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 
00:23:54.900 [2024-07-25 13:52:51.628872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.628898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.629043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.629079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.629195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.629221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.629302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.629332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.629456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.629481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.629591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.629617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.629733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.629759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.900 [2024-07-25 13:52:51.629879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.900 [2024-07-25 13:52:51.629905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.900 qpair failed and we were unable to recover it. 00:23:54.901 [2024-07-25 13:52:51.630025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.901 [2024-07-25 13:52:51.630051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.901 qpair failed and we were unable to recover it. 00:23:54.901 [2024-07-25 13:52:51.630179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.901 [2024-07-25 13:52:51.630205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.901 qpair failed and we were unable to recover it. 
00:23:54.901 [2024-07-25 13:52:51.630316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.901 [2024-07-25 13:52:51.630342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.901 qpair failed and we were unable to recover it.
00:23:54.901 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; qpair failed and we were unable to recover it) repeats continuously from 13:52:51.630316 through 13:52:51.666287, always with addr=10.0.0.2, port=4420, cycling across tqpair=0x7f3c98000b90, tqpair=0x7f3c88000b90, and tqpair=0x118b250 ...]
00:23:54.909 [2024-07-25 13:52:51.666372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.909 [2024-07-25 13:52:51.666413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.909 qpair failed and we were unable to recover it. 00:23:54.909 [2024-07-25 13:52:51.666605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.909 [2024-07-25 13:52:51.666667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.909 qpair failed and we were unable to recover it. 00:23:54.909 [2024-07-25 13:52:51.666815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.909 [2024-07-25 13:52:51.666854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.909 qpair failed and we were unable to recover it. 00:23:54.909 [2024-07-25 13:52:51.666993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.909 [2024-07-25 13:52:51.667018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.909 qpair failed and we were unable to recover it. 00:23:54.909 [2024-07-25 13:52:51.667146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.909 [2024-07-25 13:52:51.667173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.909 qpair failed and we were unable to recover it. 00:23:54.909 [2024-07-25 13:52:51.667312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.909 [2024-07-25 13:52:51.667337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.909 qpair failed and we were unable to recover it. 00:23:54.909 [2024-07-25 13:52:51.667464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.909 [2024-07-25 13:52:51.667496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.909 qpair failed and we were unable to recover it. 00:23:54.909 [2024-07-25 13:52:51.667660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.909 [2024-07-25 13:52:51.667695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.909 qpair failed and we were unable to recover it. 00:23:54.909 [2024-07-25 13:52:51.667821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.909 [2024-07-25 13:52:51.667856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.909 qpair failed and we were unable to recover it. 00:23:54.909 [2024-07-25 13:52:51.667990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.909 [2024-07-25 13:52:51.668015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.909 qpair failed and we were unable to recover it. 
00:23:54.909 [2024-07-25 13:52:51.668116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.909 [2024-07-25 13:52:51.668141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.909 qpair failed and we were unable to recover it. 00:23:54.909 [2024-07-25 13:52:51.668263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.909 [2024-07-25 13:52:51.668299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.909 qpair failed and we were unable to recover it. 00:23:54.909 [2024-07-25 13:52:51.668422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.909 [2024-07-25 13:52:51.668487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.909 qpair failed and we were unable to recover it. 00:23:54.909 [2024-07-25 13:52:51.668667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.909 [2024-07-25 13:52:51.668722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.909 qpair failed and we were unable to recover it. 00:23:54.909 [2024-07-25 13:52:51.668939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.909 [2024-07-25 13:52:51.668976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.909 qpair failed and we were unable to recover it. 00:23:54.909 [2024-07-25 13:52:51.669089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.909 [2024-07-25 13:52:51.669115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.909 qpair failed and we were unable to recover it. 00:23:54.909 [2024-07-25 13:52:51.669271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.909 [2024-07-25 13:52:51.669297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.909 qpair failed and we were unable to recover it. 00:23:54.909 [2024-07-25 13:52:51.669421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.909 [2024-07-25 13:52:51.669447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.909 qpair failed and we were unable to recover it. 00:23:54.909 [2024-07-25 13:52:51.669608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.909 [2024-07-25 13:52:51.669640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.909 qpair failed and we were unable to recover it. 00:23:54.909 [2024-07-25 13:52:51.669744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.909 [2024-07-25 13:52:51.669782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.909 qpair failed and we were unable to recover it. 
00:23:54.909 [2024-07-25 13:52:51.669888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.909 [2024-07-25 13:52:51.669914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.909 qpair failed and we were unable to recover it. 00:23:54.910 [2024-07-25 13:52:51.670016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.670041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 00:23:54.910 [2024-07-25 13:52:51.670145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.670171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 00:23:54.910 [2024-07-25 13:52:51.670256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.670282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 00:23:54.910 [2024-07-25 13:52:51.670376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.670402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 00:23:54.910 [2024-07-25 13:52:51.670568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.670623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 00:23:54.910 [2024-07-25 13:52:51.670819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.670876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 00:23:54.910 [2024-07-25 13:52:51.671034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.671064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 00:23:54.910 [2024-07-25 13:52:51.671139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.671180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 00:23:54.910 [2024-07-25 13:52:51.671261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.671287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 
00:23:54.910 [2024-07-25 13:52:51.671395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.671431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 00:23:54.910 [2024-07-25 13:52:51.671581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.671619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 00:23:54.910 [2024-07-25 13:52:51.671731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.671781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 00:23:54.910 [2024-07-25 13:52:51.671886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.671911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 00:23:54.910 [2024-07-25 13:52:51.672011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.672068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 00:23:54.910 [2024-07-25 13:52:51.672164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.672192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 00:23:54.910 [2024-07-25 13:52:51.672310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.672356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 00:23:54.910 [2024-07-25 13:52:51.672503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.672536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 00:23:54.910 [2024-07-25 13:52:51.672757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.672824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 00:23:54.910 [2024-07-25 13:52:51.673080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.673125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 
00:23:54.910 [2024-07-25 13:52:51.673231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.673258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 00:23:54.910 [2024-07-25 13:52:51.673405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.673441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 00:23:54.910 [2024-07-25 13:52:51.673593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.673648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 00:23:54.910 [2024-07-25 13:52:51.673887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.673939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 00:23:54.910 [2024-07-25 13:52:51.674099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.674125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.910 qpair failed and we were unable to recover it. 00:23:54.910 [2024-07-25 13:52:51.674243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.910 [2024-07-25 13:52:51.674270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.911 qpair failed and we were unable to recover it. 00:23:54.911 [2024-07-25 13:52:51.674401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.911 [2024-07-25 13:52:51.674430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.911 qpair failed and we were unable to recover it. 00:23:54.911 [2024-07-25 13:52:51.674582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.911 [2024-07-25 13:52:51.674638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.911 qpair failed and we were unable to recover it. 00:23:54.911 [2024-07-25 13:52:51.674877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.911 [2024-07-25 13:52:51.674929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.911 qpair failed and we were unable to recover it. 00:23:54.911 [2024-07-25 13:52:51.675122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.911 [2024-07-25 13:52:51.675148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.911 qpair failed and we were unable to recover it. 
00:23:54.911 [2024-07-25 13:52:51.675288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.911 [2024-07-25 13:52:51.675312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.911 qpair failed and we were unable to recover it. 00:23:54.911 [2024-07-25 13:52:51.675424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.911 [2024-07-25 13:52:51.675449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.911 qpair failed and we were unable to recover it. 00:23:54.911 [2024-07-25 13:52:51.675575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.911 [2024-07-25 13:52:51.675602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.911 qpair failed and we were unable to recover it. 00:23:54.911 [2024-07-25 13:52:51.675689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.911 [2024-07-25 13:52:51.675714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.911 qpair failed and we were unable to recover it. 00:23:54.911 [2024-07-25 13:52:51.675793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.911 [2024-07-25 13:52:51.675818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.911 qpair failed and we were unable to recover it. 00:23:54.911 [2024-07-25 13:52:51.675921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.911 [2024-07-25 13:52:51.675946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.911 qpair failed and we were unable to recover it. 00:23:54.911 [2024-07-25 13:52:51.676086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.911 [2024-07-25 13:52:51.676112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.911 qpair failed and we were unable to recover it. 00:23:54.911 [2024-07-25 13:52:51.676224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.911 [2024-07-25 13:52:51.676249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.911 qpair failed and we were unable to recover it. 00:23:54.911 [2024-07-25 13:52:51.676358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.911 [2024-07-25 13:52:51.676386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.911 qpair failed and we were unable to recover it. 00:23:54.911 [2024-07-25 13:52:51.676501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.911 [2024-07-25 13:52:51.676526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.911 qpair failed and we were unable to recover it. 
00:23:54.911 [2024-07-25 13:52:51.676663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.911 [2024-07-25 13:52:51.676700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.911 qpair failed and we were unable to recover it. 00:23:54.911 [2024-07-25 13:52:51.676837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.911 [2024-07-25 13:52:51.676866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.911 qpair failed and we were unable to recover it. 00:23:54.911 [2024-07-25 13:52:51.677030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.911 [2024-07-25 13:52:51.677057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.911 qpair failed and we were unable to recover it. 00:23:54.911 [2024-07-25 13:52:51.677166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.911 [2024-07-25 13:52:51.677192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.911 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.677303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.677329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.677421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.677451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.677533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.677559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.677698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.677733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.677891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.677927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.678083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.678108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 
00:23:54.912 [2024-07-25 13:52:51.678241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.678267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.678362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.678388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.678466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.678493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.678602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.678628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.678780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.678843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.678993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.679021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.679180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.679207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.679295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.679321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.679471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.679517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.679634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.679682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 
00:23:54.912 [2024-07-25 13:52:51.679821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.679847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.679996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.680022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.680142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.680167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.680252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.680294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.680417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.680443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.680559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.680584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.680713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.680740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.680832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.680873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.680951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.680977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.681117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.681143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 
00:23:54.912 [2024-07-25 13:52:51.681248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.681274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.681360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.681408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.681507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.912 [2024-07-25 13:52:51.681544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.912 qpair failed and we were unable to recover it. 00:23:54.912 [2024-07-25 13:52:51.681676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.681709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.681851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.681884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.682017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.682083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.682202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.682228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.682340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.682375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.682497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.682522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.682654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.682688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 
00:23:54.913 [2024-07-25 13:52:51.682823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.682849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.682955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.682979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.683089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.683138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.683268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.683294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.683396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.683422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.683551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.683591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.683703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.683728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.683840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.683866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.683985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.684024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.684150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.684179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 
00:23:54.913 [2024-07-25 13:52:51.684269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.684296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.684441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.684487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.684612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.684639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.684821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.684855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.684985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.685029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.685137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.685163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.685246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.685272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.685380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.685408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.685543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.685570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.685684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.685717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 
00:23:54.913 [2024-07-25 13:52:51.685840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.685867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.685990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.686016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.686123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.686149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.686254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.686279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.913 qpair failed and we were unable to recover it. 00:23:54.913 [2024-07-25 13:52:51.686378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.913 [2024-07-25 13:52:51.686405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 00:23:54.914 [2024-07-25 13:52:51.686524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.686552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 00:23:54.914 [2024-07-25 13:52:51.686652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.686678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 00:23:54.914 [2024-07-25 13:52:51.686784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.686818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 00:23:54.914 [2024-07-25 13:52:51.686953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.687000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 00:23:54.914 [2024-07-25 13:52:51.687115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.687142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 
00:23:54.914 [2024-07-25 13:52:51.687258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.687284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 00:23:54.914 [2024-07-25 13:52:51.687402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.687428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 00:23:54.914 [2024-07-25 13:52:51.687584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.687618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 00:23:54.914 [2024-07-25 13:52:51.687819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.687845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 00:23:54.914 [2024-07-25 13:52:51.687997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.688023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 00:23:54.914 [2024-07-25 13:52:51.688137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.688176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 00:23:54.914 [2024-07-25 13:52:51.688280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.688307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 00:23:54.914 [2024-07-25 13:52:51.688439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.688466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 00:23:54.914 [2024-07-25 13:52:51.688626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.688673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 00:23:54.914 [2024-07-25 13:52:51.688773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.688798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 
00:23:54.914 [2024-07-25 13:52:51.688949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.688974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 00:23:54.914 [2024-07-25 13:52:51.689071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.689097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 00:23:54.914 [2024-07-25 13:52:51.689182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.689207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 00:23:54.914 [2024-07-25 13:52:51.689311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.689337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 00:23:54.914 [2024-07-25 13:52:51.689486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.689515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 00:23:54.914 [2024-07-25 13:52:51.689656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.689702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 00:23:54.914 [2024-07-25 13:52:51.689841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.689867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 00:23:54.914 [2024-07-25 13:52:51.689963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.689990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 00:23:54.914 [2024-07-25 13:52:51.690111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.690137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 00:23:54.914 [2024-07-25 13:52:51.690214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.914 [2024-07-25 13:52:51.690239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.914 qpair failed and we were unable to recover it. 
00:23:54.914 [2024-07-25 13:52:51.690330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.914 [2024-07-25 13:52:51.690356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.914 qpair failed and we were unable to recover it.
00:23:54.914 [2024-07-25 13:52:51.690444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.914 [2024-07-25 13:52:51.690486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.914 qpair failed and we were unable to recover it.
00:23:54.914 [2024-07-25 13:52:51.690601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.914 [2024-07-25 13:52:51.690636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.914 qpair failed and we were unable to recover it.
00:23:54.914 [2024-07-25 13:52:51.690815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.914 [2024-07-25 13:52:51.690859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.914 qpair failed and we were unable to recover it.
00:23:54.914 [2024-07-25 13:52:51.690975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.914 [2024-07-25 13:52:51.691002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.914 qpair failed and we were unable to recover it.
00:23:54.914 [2024-07-25 13:52:51.691155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.914 [2024-07-25 13:52:51.691182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.914 qpair failed and we were unable to recover it.
00:23:54.914 [2024-07-25 13:52:51.691268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.914 [2024-07-25 13:52:51.691294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.914 qpair failed and we were unable to recover it.
00:23:54.914 [2024-07-25 13:52:51.691390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.914 [2024-07-25 13:52:51.691431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.914 qpair failed and we were unable to recover it.
00:23:54.914 [2024-07-25 13:52:51.691532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.914 [2024-07-25 13:52:51.691559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.914 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.691682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.691713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.691858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.691891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.692064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.692091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.692229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.692255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.692332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.692372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.692470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.692496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.692627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.692653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.692812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.692869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.692993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.693021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.693186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.693213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.693292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.693318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.693445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.693491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.693637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.693684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.693801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.693828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.693962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.693988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.694129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.694155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.694242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.694267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.694379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.694411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.694488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.694530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.694644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.694677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.694823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.694850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.694967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.694994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.695135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.695161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.695253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.695279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.695426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.695454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.695597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.695644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.695795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.695821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.695942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.695982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.696117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.696144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.696230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.696256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.696414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.696455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.696572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.696614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.696730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.696756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.696873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.915 [2024-07-25 13:52:51.696900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.915 qpair failed and we were unable to recover it.
00:23:54.915 [2024-07-25 13:52:51.697045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.697102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.697220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.697247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.697334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.697384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.697510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.697538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.697713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.697738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.697827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.697853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.697965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.698000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.698114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.698140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.698251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.698276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.702207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.702249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.702359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.702393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.702528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.702573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.702707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.702750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.702884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.702929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.703019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.703045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.703170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.703210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.703297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.703325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.703425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.703453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.703586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.703614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.703734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.703762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.703859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.703886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.703976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.704004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.704156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.704184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.704279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.704307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.704447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.704494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.704630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.704664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.704889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.704922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.705033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.705083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.916 qpair failed and we were unable to recover it.
00:23:54.916 [2024-07-25 13:52:51.705230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.916 [2024-07-25 13:52:51.705255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.705353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.705380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.705512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.705541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.705706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.705769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.705997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.706030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.706198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.706226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.706342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.706369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.706532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.706582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.706720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.706753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.706879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.706905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.707044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.707081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.707178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.707205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.707389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.707438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.707609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.707655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.707793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.707841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.707931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.707957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.708082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.708110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.708212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.708262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.708432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.708482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.708612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.708659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.708793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.708818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.708973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.708999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.709131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.709175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.709260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.709286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.709464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.709489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.709613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.709638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.709795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.709821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.709916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.709941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.710043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.710108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.710269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.710325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.710435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.710464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.710661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.710712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.710876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.710969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.711168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.917 [2024-07-25 13:52:51.711213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.917 qpair failed and we were unable to recover it.
00:23:54.917 [2024-07-25 13:52:51.711424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.711459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.711580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.711609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.711732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.711760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.711996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.712069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.712207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.712234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.712355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.712382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.712560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.712623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.712868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.712902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.713022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.713049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.713202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.713229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.713347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.713382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.713480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.713512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.713618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.713653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.713846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.713880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.714033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.714065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.714156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.714182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.714264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.714291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.714422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.714449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.714590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.714624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.714793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.714827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.715040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.715108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.715228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.715255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.715407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.715434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.715514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.715541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.715656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.715700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.715887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.715921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.716088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.716137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.716222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.716249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.716403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.716429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.716515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.716542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.716652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.716686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.716823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.716858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.717041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.717087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.717251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.717293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.918 qpair failed and we were unable to recover it.
00:23:54.918 [2024-07-25 13:52:51.717380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.918 [2024-07-25 13:52:51.717411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.717549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.717578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.717701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.717734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.717882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.717915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.718082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.718126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.718214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.718241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.718337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.718365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.718471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.718499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.718639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.718691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.718778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.718805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.718906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.718932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.719019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.719045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.719209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.719246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.719343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.719370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.719492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.719517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.719601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.719626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.719720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.719745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.719859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.719889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.719967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.719993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.720119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.720146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.720258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.720283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.720403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.720428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.720541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.720566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.720649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.720674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.720765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.720793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.720880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.720921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.721008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.721034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.721176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.721219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.721359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.721385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.721502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.721530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.721613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.919 [2024-07-25 13:52:51.721638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.919 qpair failed and we were unable to recover it.
00:23:54.919 [2024-07-25 13:52:51.721784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.919 [2024-07-25 13:52:51.721811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.919 qpair failed and we were unable to recover it. 00:23:54.919 [2024-07-25 13:52:51.721923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.919 [2024-07-25 13:52:51.721950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.919 qpair failed and we were unable to recover it. 00:23:54.919 [2024-07-25 13:52:51.722093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.919 [2024-07-25 13:52:51.722119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.919 qpair failed and we were unable to recover it. 00:23:54.919 [2024-07-25 13:52:51.722201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.919 [2024-07-25 13:52:51.722227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.919 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.722313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.722338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.722486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.722512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.722594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.722619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.722730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.722756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.722872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.722897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.722979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.723004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 
00:23:54.920 [2024-07-25 13:52:51.723104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.723143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.723257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.723285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.723414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.723451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.723554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.723589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.723716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.723745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.723835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.723863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.724001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.724029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.724158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.724183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.724300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.724325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.724410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.724435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 
00:23:54.920 [2024-07-25 13:52:51.724551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.724576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.724687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.724724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.724838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.724864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.725009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.725034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.725164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.725189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.725275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.725300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.725422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.725447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.725571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.725596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.725709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.725734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.725832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.725870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 
00:23:54.920 [2024-07-25 13:52:51.726021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.726048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.726149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.726175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.726306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.726335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.726467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.726511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.726622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.726648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.726789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.726816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.726929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.726954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.727075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.920 [2024-07-25 13:52:51.727101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.920 qpair failed and we were unable to recover it. 00:23:54.920 [2024-07-25 13:52:51.727214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.727241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.727325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.727350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 
00:23:54.921 [2024-07-25 13:52:51.727467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.727496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.727608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.727634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.727742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.727766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.727847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.727872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.727954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.727981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.728125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.728150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.728249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.728275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.728379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.728408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.728510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.728537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.728693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.728734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 
00:23:54.921 [2024-07-25 13:52:51.728869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.728895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.729008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.729033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.729185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.729211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.729323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.729347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.729439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.729464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.729562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.729587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.729686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.729715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.729801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.729828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.729925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.729967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.730075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.730103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 
00:23:54.921 [2024-07-25 13:52:51.730218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.730245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.730383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.730425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.730514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.730540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.730686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.730711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.730806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.730831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.730937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.730962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.731092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.731136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.731245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.731276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.731420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.731463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.921 [2024-07-25 13:52:51.731580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.731623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 
00:23:54.921 [2024-07-25 13:52:51.731737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.921 [2024-07-25 13:52:51.731764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.921 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.731878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.731904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.732047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.732081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.732166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.732192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.732310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.732336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.732442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.732487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.732572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.732600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.732724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.732753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.732871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.732899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.733046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.733087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 
00:23:54.922 [2024-07-25 13:52:51.733210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.733243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.733336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.733362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.733542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.733595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.733686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.733713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.733848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.733874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.733988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.734014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.734158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.734200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.734307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.734351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.734508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.734541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.734654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.734687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 
00:23:54.922 [2024-07-25 13:52:51.734824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.734851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.734992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.735018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.735111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.735138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.735258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.735285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.735397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.735424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.735535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.735561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.735694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.735720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.735811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.735837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.735945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.735970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.736102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.736128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 
00:23:54.922 [2024-07-25 13:52:51.736221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.922 [2024-07-25 13:52:51.736247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.922 qpair failed and we were unable to recover it. 00:23:54.922 [2024-07-25 13:52:51.736335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.736361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.736513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.736539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.736668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.736694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.736839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.736866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.736978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.737004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.737129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.737158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.737324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.737365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.737495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.737544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.737698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.737746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 
00:23:54.923 [2024-07-25 13:52:51.737872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.737898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.737978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.738004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.738131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.738158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.738270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.738298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.738432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.738461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.738562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.738590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.738690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.738718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.738854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.738880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.739001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.739027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.739126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.739153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 
00:23:54.923 [2024-07-25 13:52:51.739261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.739289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.739481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.739523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.739683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.739712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.739817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.739868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.740019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.740047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.740244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.740271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.740468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.740519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.740709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.740759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.740912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.740940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.741035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.741094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 
00:23:54.923 [2024-07-25 13:52:51.741211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.741238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.741397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.741454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.741599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.741645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.741744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.741772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.741891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.741924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.742065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.923 [2024-07-25 13:52:51.742110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.923 qpair failed and we were unable to recover it. 00:23:54.923 [2024-07-25 13:52:51.742225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.742252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.742388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.742416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.742519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.742547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.742666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.742695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 
00:23:54.924 [2024-07-25 13:52:51.742810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.742856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.742993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.743020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.743145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.743172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.743284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.743311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.743427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.743453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.743595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.743621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.743743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.743770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.743891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.743917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.744039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.744070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.744161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.744187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 
00:23:54.924 [2024-07-25 13:52:51.744280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.744305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.744417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.744443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.744544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.744572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.744691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.744719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.744861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.744904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.745044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.745078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.745197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.745223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.745330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.745359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.745490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.745516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.745635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.745661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 
00:23:54.924 [2024-07-25 13:52:51.745777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.745803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.745906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.745937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.746056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.746105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.746258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.746287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.746442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.746479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.746604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.746658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.746821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.746855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.747040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.747072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.747195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.747222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 00:23:54.924 [2024-07-25 13:52:51.747308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.924 [2024-07-25 13:52:51.747353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.924 qpair failed and we were unable to recover it. 
00:23:54.924 [2024-07-25 13:52:51.747480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.747509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.747685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.747722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.747844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.747891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.748028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.748055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.748161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.748194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.748287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.748314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.748432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.748459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.748541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.748568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.748680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.748726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.748892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.748929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.749042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.749106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.749249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.749276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.749386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.749431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.749599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.749635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.749799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.749835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.749975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.750001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.750145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.750172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.750314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.750341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.750490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.750517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.750638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.750666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.750784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.750812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.750932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.750976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.751149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.751176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.751293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.751320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.751476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.751503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.925 qpair failed and we were unable to recover it.
00:23:54.925 [2024-07-25 13:52:51.751648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.925 [2024-07-25 13:52:51.751674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.751819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.926 [2024-07-25 13:52:51.751846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.751953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.926 [2024-07-25 13:52:51.751999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.752106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.926 [2024-07-25 13:52:51.752133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.752214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.926 [2024-07-25 13:52:51.752241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.752394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.926 [2024-07-25 13:52:51.752421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.752568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.926 [2024-07-25 13:52:51.752611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.752736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.926 [2024-07-25 13:52:51.752790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.753043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.926 [2024-07-25 13:52:51.753083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.753214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.926 [2024-07-25 13:52:51.753241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.753391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.926 [2024-07-25 13:52:51.753418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.753518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.926 [2024-07-25 13:52:51.753546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.753644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.926 [2024-07-25 13:52:51.753673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.753834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.926 [2024-07-25 13:52:51.753878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.754038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.926 [2024-07-25 13:52:51.754077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.754174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.926 [2024-07-25 13:52:51.754202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.754341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.926 [2024-07-25 13:52:51.754370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.754577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.926 [2024-07-25 13:52:51.754631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.754779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.926 [2024-07-25 13:52:51.754830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.754917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.926 [2024-07-25 13:52:51.754947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.755106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.926 [2024-07-25 13:52:51.755135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.755262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.926 [2024-07-25 13:52:51.755306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.755417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.926 [2024-07-25 13:52:51.755460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.755580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.926 [2024-07-25 13:52:51.755609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.926 qpair failed and we were unable to recover it.
00:23:54.926 [2024-07-25 13:52:51.755733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.927 [2024-07-25 13:52:51.755760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.927 qpair failed and we were unable to recover it.
00:23:54.927 [2024-07-25 13:52:51.755848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.927 [2024-07-25 13:52:51.755875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.927 qpair failed and we were unable to recover it.
00:23:54.927 [2024-07-25 13:52:51.755991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.927 [2024-07-25 13:52:51.756018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.927 qpair failed and we were unable to recover it.
00:23:54.927 [2024-07-25 13:52:51.756116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.927 [2024-07-25 13:52:51.756143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.927 qpair failed and we were unable to recover it.
00:23:54.927 [2024-07-25 13:52:51.756255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.927 [2024-07-25 13:52:51.756282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.927 qpair failed and we were unable to recover it.
00:23:54.927 [2024-07-25 13:52:51.756408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.927 [2024-07-25 13:52:51.756454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.927 qpair failed and we were unable to recover it.
00:23:54.927 [2024-07-25 13:52:51.756632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.927 [2024-07-25 13:52:51.756676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.927 qpair failed and we were unable to recover it.
00:23:54.927 [2024-07-25 13:52:51.756862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.927 [2024-07-25 13:52:51.756905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.927 qpair failed and we were unable to recover it.
00:23:54.927 [2024-07-25 13:52:51.757065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.927 [2024-07-25 13:52:51.757094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.927 qpair failed and we were unable to recover it.
00:23:54.927 [2024-07-25 13:52:51.757222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.927 [2024-07-25 13:52:51.757249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.927 qpair failed and we were unable to recover it.
00:23:54.927 [2024-07-25 13:52:51.757390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.927 [2024-07-25 13:52:51.757433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.927 qpair failed and we were unable to recover it.
00:23:54.927 [2024-07-25 13:52:51.757575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.927 [2024-07-25 13:52:51.757624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.927 qpair failed and we were unable to recover it.
00:23:54.927 [2024-07-25 13:52:51.757820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.927 [2024-07-25 13:52:51.757853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.927 qpair failed and we were unable to recover it.
00:23:54.927 [2024-07-25 13:52:51.757987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.927 [2024-07-25 13:52:51.758013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.927 qpair failed and we were unable to recover it.
00:23:54.927 [2024-07-25 13:52:51.758109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.927 [2024-07-25 13:52:51.758136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.927 qpair failed and we were unable to recover it.
00:23:54.927 [2024-07-25 13:52:51.758243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.927 [2024-07-25 13:52:51.758271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.927 qpair failed and we were unable to recover it.
00:23:54.927 [2024-07-25 13:52:51.758405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.927 [2024-07-25 13:52:51.758432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.927 qpair failed and we were unable to recover it.
00:23:54.927 [2024-07-25 13:52:51.758520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.927 [2024-07-25 13:52:51.758546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.927 qpair failed and we were unable to recover it.
00:23:54.927 [2024-07-25 13:52:51.758639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.927 [2024-07-25 13:52:51.758666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.927 qpair failed and we were unable to recover it.
00:23:54.927 [2024-07-25 13:52:51.758791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.927 [2024-07-25 13:52:51.758817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.927 qpair failed and we were unable to recover it.
00:23:54.927 [2024-07-25 13:52:51.758895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.927 [2024-07-25 13:52:51.758921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.927 qpair failed and we were unable to recover it.
00:23:54.927 [2024-07-25 13:52:51.759076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.928 [2024-07-25 13:52:51.759103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.928 qpair failed and we were unable to recover it.
00:23:54.928 [2024-07-25 13:52:51.759204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.928 [2024-07-25 13:52:51.759243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.928 qpair failed and we were unable to recover it.
00:23:54.928 [2024-07-25 13:52:51.759384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.928 [2024-07-25 13:52:51.759412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.928 qpair failed and we were unable to recover it.
00:23:54.928 [2024-07-25 13:52:51.759552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.928 [2024-07-25 13:52:51.759586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.928 qpair failed and we were unable to recover it.
00:23:54.928 [2024-07-25 13:52:51.759730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.928 [2024-07-25 13:52:51.759756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.928 qpair failed and we were unable to recover it.
00:23:54.928 [2024-07-25 13:52:51.759866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.928 [2024-07-25 13:52:51.759895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.928 qpair failed and we were unable to recover it.
00:23:54.928 [2024-07-25 13:52:51.760031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.928 [2024-07-25 13:52:51.760102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.928 qpair failed and we were unable to recover it.
00:23:54.928 [2024-07-25 13:52:51.760267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.928 [2024-07-25 13:52:51.760299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.928 qpair failed and we were unable to recover it.
00:23:54.928 [2024-07-25 13:52:51.760438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.928 [2024-07-25 13:52:51.760467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.928 qpair failed and we were unable to recover it.
00:23:54.928 [2024-07-25 13:52:51.760561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.928 [2024-07-25 13:52:51.760589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.928 qpair failed and we were unable to recover it.
00:23:54.928 [2024-07-25 13:52:51.760741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.928 [2024-07-25 13:52:51.760770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.928 qpair failed and we were unable to recover it.
00:23:54.928 [2024-07-25 13:52:51.760885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.928 [2024-07-25 13:52:51.760934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.928 qpair failed and we were unable to recover it.
00:23:54.928 [2024-07-25 13:52:51.761037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.928 [2024-07-25 13:52:51.761071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.928 qpair failed and we were unable to recover it.
00:23:54.928 [2024-07-25 13:52:51.761192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.928 [2024-07-25 13:52:51.761220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.928 qpair failed and we were unable to recover it.
00:23:54.928 [2024-07-25 13:52:51.761361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.928 [2024-07-25 13:52:51.761417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.928 qpair failed and we were unable to recover it.
00:23:54.928 [2024-07-25 13:52:51.761528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.928 [2024-07-25 13:52:51.761581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.928 qpair failed and we were unable to recover it.
00:23:54.928 [2024-07-25 13:52:51.761728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.928 [2024-07-25 13:52:51.761764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:54.928 qpair failed and we were unable to recover it.
00:23:54.928 [2024-07-25 13:52:51.761878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.928 [2024-07-25 13:52:51.761907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.928 qpair failed and we were unable to recover it.
00:23:54.928 [2024-07-25 13:52:51.762035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.928 [2024-07-25 13:52:51.762085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.928 qpair failed and we were unable to recover it.
00:23:54.928 [2024-07-25 13:52:51.762236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.928 [2024-07-25 13:52:51.762264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.928 qpair failed and we were unable to recover it.
00:23:54.928 [2024-07-25 13:52:51.762412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.928 [2024-07-25 13:52:51.762443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.928 qpair failed and we were unable to recover it.
00:23:54.928 [2024-07-25 13:52:51.762568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.762597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.929 qpair failed and we were unable to recover it.
00:23:54.929 [2024-07-25 13:52:51.762693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.762722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.929 qpair failed and we were unable to recover it.
00:23:54.929 [2024-07-25 13:52:51.762847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.762876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.929 qpair failed and we were unable to recover it.
00:23:54.929 [2024-07-25 13:52:51.762979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.763005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.929 qpair failed and we were unable to recover it.
00:23:54.929 [2024-07-25 13:52:51.763098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.763126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.929 qpair failed and we were unable to recover it.
00:23:54.929 [2024-07-25 13:52:51.763214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.763242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.929 qpair failed and we were unable to recover it.
00:23:54.929 [2024-07-25 13:52:51.763323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.763354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.929 qpair failed and we were unable to recover it.
00:23:54.929 [2024-07-25 13:52:51.763490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.763534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.929 qpair failed and we were unable to recover it.
00:23:54.929 [2024-07-25 13:52:51.763709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.763762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.929 qpair failed and we were unable to recover it.
00:23:54.929 [2024-07-25 13:52:51.763897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.763931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.929 qpair failed and we were unable to recover it.
00:23:54.929 [2024-07-25 13:52:51.764111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.764150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.929 qpair failed and we were unable to recover it.
00:23:54.929 [2024-07-25 13:52:51.764269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.764296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.929 qpair failed and we were unable to recover it.
00:23:54.929 [2024-07-25 13:52:51.764429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.764471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.929 qpair failed and we were unable to recover it.
00:23:54.929 [2024-07-25 13:52:51.764625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.764674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.929 qpair failed and we were unable to recover it.
00:23:54.929 [2024-07-25 13:52:51.764781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.764822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.929 qpair failed and we were unable to recover it.
00:23:54.929 [2024-07-25 13:52:51.764955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.764999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.929 qpair failed and we were unable to recover it.
00:23:54.929 [2024-07-25 13:52:51.765110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.765155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.929 qpair failed and we were unable to recover it.
00:23:54.929 [2024-07-25 13:52:51.765251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.765278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.929 qpair failed and we were unable to recover it.
00:23:54.929 [2024-07-25 13:52:51.765392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.765419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.929 qpair failed and we were unable to recover it.
00:23:54.929 [2024-07-25 13:52:51.765509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.765557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.929 qpair failed and we were unable to recover it.
00:23:54.929 [2024-07-25 13:52:51.765719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.765771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:54.929 qpair failed and we were unable to recover it.
00:23:54.929 [2024-07-25 13:52:51.765934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.765964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.929 qpair failed and we were unable to recover it.
00:23:54.929 [2024-07-25 13:52:51.766093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.766123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.929 qpair failed and we were unable to recover it.
00:23:54.929 [2024-07-25 13:52:51.766242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.929 [2024-07-25 13:52:51.766269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.766401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.930 [2024-07-25 13:52:51.766431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.766606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.930 [2024-07-25 13:52:51.766641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.766783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.930 [2024-07-25 13:52:51.766817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.766919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.930 [2024-07-25 13:52:51.766952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.767077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.930 [2024-07-25 13:52:51.767122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.767265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.930 [2024-07-25 13:52:51.767292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.767409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.930 [2024-07-25 13:52:51.767452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.767645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.930 [2024-07-25 13:52:51.767687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.767823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.930 [2024-07-25 13:52:51.767875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.768075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.930 [2024-07-25 13:52:51.768124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.768252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.930 [2024-07-25 13:52:51.768279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.768402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.930 [2024-07-25 13:52:51.768428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.768524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.930 [2024-07-25 13:52:51.768551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.768670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.930 [2024-07-25 13:52:51.768700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.768843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.930 [2024-07-25 13:52:51.768888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.769017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.930 [2024-07-25 13:52:51.769045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.769146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.930 [2024-07-25 13:52:51.769172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.769260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.930 [2024-07-25 13:52:51.769286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.769381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.930 [2024-07-25 13:52:51.769407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.769542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.930 [2024-07-25 13:52:51.769570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.769725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.930 [2024-07-25 13:52:51.769772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.769861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.930 [2024-07-25 13:52:51.769888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.769989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.930 [2024-07-25 13:52:51.770017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.930 qpair failed and we were unable to recover it.
00:23:54.930 [2024-07-25 13:52:51.770156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.770188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.770285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.770310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.770444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.770473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.770600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.770643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.770766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.770793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.770908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.770936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.771040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.771073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.771183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.771209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.771359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.771403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.771598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.771628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.771761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.771790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.771913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.771943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.772046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.772080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.772199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.772226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.772351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.772396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.772527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.772552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.772642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.772668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.772770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.772798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.772922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.772950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.773120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.773147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.773238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.773264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.773349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.773375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.773491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.773517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.773625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.931 [2024-07-25 13:52:51.773653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.931 qpair failed and we were unable to recover it.
00:23:54.931 [2024-07-25 13:52:51.773803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.932 [2024-07-25 13:52:51.773831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.932 qpair failed and we were unable to recover it.
00:23:54.932 [2024-07-25 13:52:51.773958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.932 [2024-07-25 13:52:51.773986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.932 qpair failed and we were unable to recover it.
00:23:54.932 [2024-07-25 13:52:51.774083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.932 [2024-07-25 13:52:51.774111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.932 qpair failed and we were unable to recover it.
00:23:54.932 [2024-07-25 13:52:51.774221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.932 [2024-07-25 13:52:51.774251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.932 qpair failed and we were unable to recover it.
00:23:54.932 [2024-07-25 13:52:51.774347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.932 [2024-07-25 13:52:51.774373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.932 qpair failed and we were unable to recover it.
00:23:54.932 [2024-07-25 13:52:51.774463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.932 [2024-07-25 13:52:51.774491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.932 qpair failed and we were unable to recover it.
00:23:54.932 [2024-07-25 13:52:51.774678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.932 [2024-07-25 13:52:51.774706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.932 qpair failed and we were unable to recover it.
00:23:54.932 [2024-07-25 13:52:51.774819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:54.932 [2024-07-25 13:52:51.774847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:54.932 qpair failed and we were unable to recover it.
00:23:54.932 [2024-07-25 13:52:51.774962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.932 [2024-07-25 13:52:51.775005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.932 qpair failed and we were unable to recover it. 00:23:54.932 [2024-07-25 13:52:51.775164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.932 [2024-07-25 13:52:51.775194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.932 qpair failed and we were unable to recover it. 00:23:54.932 [2024-07-25 13:52:51.775311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.932 [2024-07-25 13:52:51.775356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.932 qpair failed and we were unable to recover it. 00:23:54.932 [2024-07-25 13:52:51.775509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.932 [2024-07-25 13:52:51.775543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.932 qpair failed and we were unable to recover it. 00:23:54.932 [2024-07-25 13:52:51.775679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.932 [2024-07-25 13:52:51.775723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.932 qpair failed and we were unable to recover it. 00:23:54.932 [2024-07-25 13:52:51.775930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.932 [2024-07-25 13:52:51.775994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.932 qpair failed and we were unable to recover it. 00:23:54.932 [2024-07-25 13:52:51.776190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.932 [2024-07-25 13:52:51.776217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.932 qpair failed and we were unable to recover it. 00:23:54.932 [2024-07-25 13:52:51.776323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.932 [2024-07-25 13:52:51.776353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.932 qpair failed and we were unable to recover it. 00:23:54.932 [2024-07-25 13:52:51.776476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.932 [2024-07-25 13:52:51.776504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.932 qpair failed and we were unable to recover it. 00:23:54.932 [2024-07-25 13:52:51.776714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.932 [2024-07-25 13:52:51.776747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.932 qpair failed and we were unable to recover it. 
00:23:54.932 [2024-07-25 13:52:51.776890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.932 [2024-07-25 13:52:51.776923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.932 qpair failed and we were unable to recover it. 00:23:54.932 [2024-07-25 13:52:51.777113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.932 [2024-07-25 13:52:51.777143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.932 qpair failed and we were unable to recover it. 00:23:54.932 [2024-07-25 13:52:51.777260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.932 [2024-07-25 13:52:51.777289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.932 qpair failed and we were unable to recover it. 00:23:54.932 [2024-07-25 13:52:51.777446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.932 [2024-07-25 13:52:51.777475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.932 qpair failed and we were unable to recover it. 00:23:54.932 [2024-07-25 13:52:51.777633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.932 [2024-07-25 13:52:51.777661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.932 qpair failed and we were unable to recover it. 00:23:54.932 [2024-07-25 13:52:51.777813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.932 [2024-07-25 13:52:51.777866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.932 qpair failed and we were unable to recover it. 00:23:54.932 [2024-07-25 13:52:51.777991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.932 [2024-07-25 13:52:51.778018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.932 qpair failed and we were unable to recover it. 00:23:54.933 [2024-07-25 13:52:51.778152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.778180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 00:23:54.933 [2024-07-25 13:52:51.778276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.778302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 00:23:54.933 [2024-07-25 13:52:51.778458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.778514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 
00:23:54.933 [2024-07-25 13:52:51.778682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.778731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 00:23:54.933 [2024-07-25 13:52:51.778854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.778882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 00:23:54.933 [2024-07-25 13:52:51.779011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.779043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 00:23:54.933 [2024-07-25 13:52:51.779145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.779173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 00:23:54.933 [2024-07-25 13:52:51.779293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.779321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 00:23:54.933 [2024-07-25 13:52:51.779409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.779436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 00:23:54.933 [2024-07-25 13:52:51.779571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.779599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 00:23:54.933 [2024-07-25 13:52:51.779721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.779749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 00:23:54.933 [2024-07-25 13:52:51.779867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.779895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 00:23:54.933 [2024-07-25 13:52:51.780018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.780068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 
00:23:54.933 [2024-07-25 13:52:51.780172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.780202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 00:23:54.933 [2024-07-25 13:52:51.780327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.780357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 00:23:54.933 [2024-07-25 13:52:51.780480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.780509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 00:23:54.933 [2024-07-25 13:52:51.780607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.780637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 00:23:54.933 [2024-07-25 13:52:51.780767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.780795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 00:23:54.933 [2024-07-25 13:52:51.780889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.780918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 00:23:54.933 [2024-07-25 13:52:51.781025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.781056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 00:23:54.933 [2024-07-25 13:52:51.781196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.781225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 00:23:54.933 [2024-07-25 13:52:51.781342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.781371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 00:23:54.933 [2024-07-25 13:52:51.781537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.781577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 
00:23:54.933 [2024-07-25 13:52:51.781730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.781771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 00:23:54.933 [2024-07-25 13:52:51.781889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.781929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.933 qpair failed and we were unable to recover it. 00:23:54.933 [2024-07-25 13:52:51.782096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.933 [2024-07-25 13:52:51.782125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.782210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.782239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.782365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.782394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.782604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.782637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.782777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.782810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.782908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.782943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.783130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.783160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.783301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.783330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 
00:23:54.934 [2024-07-25 13:52:51.783515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.783567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.783686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.783736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.783824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.783852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.783937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.783965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.784089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.784118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.784230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.784258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.784354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.784382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.784508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.784536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.784661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.784689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.784818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.784847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 
00:23:54.934 [2024-07-25 13:52:51.784965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.784994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.785132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.785162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.785254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.785288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.785440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.785474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.785626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.785666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.785869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.785910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.786071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.786101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.786252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.786280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.786404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.786432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 00:23:54.934 [2024-07-25 13:52:51.786681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.934 [2024-07-25 13:52:51.786722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.934 qpair failed and we were unable to recover it. 
00:23:54.935 [2024-07-25 13:52:51.786895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.786936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.787065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.787116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.787239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.787268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.787396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.787425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.787515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.787544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.787672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.787721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.787887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.787921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.788065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.788093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.788209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.788237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.788355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.788403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 
00:23:54.935 [2024-07-25 13:52:51.788526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.788554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.788668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.788696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.788782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.788809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.788936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.788964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.789054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.789095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.789188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.789216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.789340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.789368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.789532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.789563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.789697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.789726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.789879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.789913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 
00:23:54.935 [2024-07-25 13:52:51.790044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.790080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.790210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.790240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.790366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.790395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.790494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.790523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.790675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.790724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.790906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.790959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.791083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.791111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.791268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.791318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.791475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.791519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.935 qpair failed and we were unable to recover it. 00:23:54.935 [2024-07-25 13:52:51.791671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.935 [2024-07-25 13:52:51.791718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 
00:23:54.936 [2024-07-25 13:52:51.791820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.791848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.791987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.792014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.792201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.792268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.792493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.792557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.792775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.792816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.792970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.792999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.793123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.793152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.793273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.793301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.793514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.793579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.793776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.793816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 
00:23:54.936 [2024-07-25 13:52:51.793971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.794011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.794168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.794198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.794323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.794350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.794441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.794468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.794653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.794691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.794852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.794890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.795072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.795136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.795263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.795292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.795405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.795434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.795565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.795592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 
00:23:54.936 [2024-07-25 13:52:51.795715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.795756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.795889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.795930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.796119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.796163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.796301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.796331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.796417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.796447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.796650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.796693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.796874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.796914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.797078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.797134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.797288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.936 [2024-07-25 13:52:51.797317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.936 qpair failed and we were unable to recover it. 00:23:54.936 [2024-07-25 13:52:51.797453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.937 [2024-07-25 13:52:51.797487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.937 qpair failed and we were unable to recover it. 
00:23:54.937 [2024-07-25 13:52:51.797609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.937 [2024-07-25 13:52:51.797637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.937 qpair failed and we were unable to recover it. 00:23:54.937 [2024-07-25 13:52:51.797846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.937 [2024-07-25 13:52:51.797886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.937 qpair failed and we were unable to recover it. 00:23:54.937 [2024-07-25 13:52:51.798021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.937 [2024-07-25 13:52:51.798086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.937 qpair failed and we were unable to recover it. 00:23:54.937 [2024-07-25 13:52:51.798239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.937 [2024-07-25 13:52:51.798267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.937 qpair failed and we were unable to recover it. 00:23:54.937 [2024-07-25 13:52:51.798390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.937 [2024-07-25 13:52:51.798431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.937 qpair failed and we were unable to recover it. 00:23:54.937 [2024-07-25 13:52:51.798603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.937 [2024-07-25 13:52:51.798643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.937 qpair failed and we were unable to recover it. 00:23:54.937 [2024-07-25 13:52:51.798803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.937 [2024-07-25 13:52:51.798843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.937 qpair failed and we were unable to recover it. 00:23:54.937 [2024-07-25 13:52:51.799043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.937 [2024-07-25 13:52:51.799111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.937 qpair failed and we were unable to recover it. 00:23:54.937 [2024-07-25 13:52:51.799244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.937 [2024-07-25 13:52:51.799273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.937 qpair failed and we were unable to recover it. 00:23:54.937 [2024-07-25 13:52:51.799398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.937 [2024-07-25 13:52:51.799445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.937 qpair failed and we were unable to recover it. 
00:23:54.937 [2024-07-25 13:52:51.799647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.937 [2024-07-25 13:52:51.799687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.937 qpair failed and we were unable to recover it. 00:23:54.937 [2024-07-25 13:52:51.799884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.937 [2024-07-25 13:52:51.799924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.937 qpair failed and we were unable to recover it. 00:23:54.937 [2024-07-25 13:52:51.800071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.937 [2024-07-25 13:52:51.800121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.937 qpair failed and we were unable to recover it. 00:23:54.937 [2024-07-25 13:52:51.800246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.937 [2024-07-25 13:52:51.800274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.937 qpair failed and we were unable to recover it. 00:23:54.937 [2024-07-25 13:52:51.800395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.937 [2024-07-25 13:52:51.800424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.937 qpair failed and we were unable to recover it. 00:23:54.937 [2024-07-25 13:52:51.800546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.937 [2024-07-25 13:52:51.800594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.937 qpair failed and we were unable to recover it. 00:23:54.937 [2024-07-25 13:52:51.800792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.937 [2024-07-25 13:52:51.800832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.937 qpair failed and we were unable to recover it. 00:23:54.937 [2024-07-25 13:52:51.800967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.937 [2024-07-25 13:52:51.800995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.937 qpair failed and we were unable to recover it. 00:23:54.937 [2024-07-25 13:52:51.801097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.937 [2024-07-25 13:52:51.801126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.937 qpair failed and we were unable to recover it. 00:23:54.937 [2024-07-25 13:52:51.801251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.937 [2024-07-25 13:52:51.801281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.937 qpair failed and we were unable to recover it. 
00:23:54.937 [2024-07-25 13:52:51.801431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.937 [2024-07-25 13:52:51.801460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.937 qpair failed and we were unable to recover it.
[... this three-message sequence (posix_sock_create: connect() failed, errno = 111, i.e. ECONNREFUSED; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats back-to-back, differing only in its microsecond timestamps, through 2024-07-25 13:52:51.842944 ...]
00:23:54.945 [2024-07-25 13:52:51.843176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.945 [2024-07-25 13:52:51.843250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.945 qpair failed and we were unable to recover it.
[... the sequence then repeats identically for tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 through 2024-07-25 13:52:51.845845 ...]
00:23:54.945 [2024-07-25 13:52:51.845995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.945 [2024-07-25 13:52:51.846039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.945 qpair failed and we were unable to recover it. 00:23:54.945 [2024-07-25 13:52:51.846200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.945 [2024-07-25 13:52:51.846245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.945 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.846414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.846457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.846626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.846669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.846832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.846873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.847076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.847142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.847335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.847381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.847556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.847600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.847745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.847790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.847954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.847999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 
00:23:54.946 [2024-07-25 13:52:51.848208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.848253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.848427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.848470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.848639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.848683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.848859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.848902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.849041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.849113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.849306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.849352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.849570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.849616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.849797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.849844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.850082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.850129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.850257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.850303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 
00:23:54.946 [2024-07-25 13:52:51.850522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.850568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.850779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.850825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.850960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.851007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.851185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.851232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.851447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.851493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.851667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.851711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.851913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.851984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.852129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.852173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.852344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.852388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.852562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.852606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 
00:23:54.946 [2024-07-25 13:52:51.852739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.852783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.852964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.853008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.853183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.853229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.853443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.853486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.853674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.853733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.853887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.853930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.854096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.854148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.854320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.854364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.854550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.854593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.854807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.854851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 
00:23:54.946 [2024-07-25 13:52:51.855100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.855147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.855365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.946 [2024-07-25 13:52:51.855410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.946 qpair failed and we were unable to recover it. 00:23:54.946 [2024-07-25 13:52:51.855640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.855686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.855868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.855914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.856089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.856135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.856324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.856369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.856549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.856595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.856756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.856801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.857013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.857069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.857217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.857262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 
00:23:54.947 [2024-07-25 13:52:51.857446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.857492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.857639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.857684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.857834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.857887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.858107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.858155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.858342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.858389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.858533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.858578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.858730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.858776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.859001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.859048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.859215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.859261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.859471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.859517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 
00:23:54.947 [2024-07-25 13:52:51.859726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.859786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.859984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.860029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.860174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.860220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.860372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.860418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.860558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.860605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.860828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.860875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.861044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.861099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.861259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.861305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.861477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.861523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.861702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.861746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 
00:23:54.947 [2024-07-25 13:52:51.861938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.861983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.862184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.862230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.862448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.862493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.862733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.862793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.863027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.863129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.947 qpair failed and we were unable to recover it. 00:23:54.947 [2024-07-25 13:52:51.863357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.947 [2024-07-25 13:52:51.863403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.863618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.863671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.863823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.863869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.864009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.864054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.864254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.864300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 
00:23:54.948 [2024-07-25 13:52:51.864489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.864535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.864724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.864771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.864986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.865032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.865234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.865281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.865497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.865543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.865683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.865757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.865993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.866039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.866301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.866378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.866609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.866688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.866951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.867012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 
00:23:54.948 [2024-07-25 13:52:51.867317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.867396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.867673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.867751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.867977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.868039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.868295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.868356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.868625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.868685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.868957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.869017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.869295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.869362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.869639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.869717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.870002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.870080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.870342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.870421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 
00:23:54.948 [2024-07-25 13:52:51.870679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.870739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.870978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.871039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.871345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.871422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.871666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.871744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.871987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.872046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.872301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.872379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.872668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.872746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.872972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.873034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.873286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.873365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.873600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.873679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 
00:23:54.948 [2024-07-25 13:52:51.873917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.873977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.874225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.874304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.874538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.874615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.874825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.874902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.875135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.875196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.875414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.875459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.875631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.948 [2024-07-25 13:52:51.875684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.948 qpair failed and we were unable to recover it. 00:23:54.948 [2024-07-25 13:52:51.875904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.949 [2024-07-25 13:52:51.875950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.949 qpair failed and we were unable to recover it. 00:23:54.949 [2024-07-25 13:52:51.876169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.949 [2024-07-25 13:52:51.876215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.949 qpair failed and we were unable to recover it. 00:23:54.949 [2024-07-25 13:52:51.876472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.949 [2024-07-25 13:52:51.876521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.949 qpair failed and we were unable to recover it. 
00:23:54.949 [2024-07-25 13:52:51.876701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.949 [2024-07-25 13:52:51.876750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.949 qpair failed and we were unable to recover it. 00:23:54.949 [2024-07-25 13:52:51.876943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.949 [2024-07-25 13:52:51.876992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.949 qpair failed and we were unable to recover it. 00:23:54.949 [2024-07-25 13:52:51.877207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.949 [2024-07-25 13:52:51.877256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.949 qpair failed and we were unable to recover it. 00:23:54.949 [2024-07-25 13:52:51.877450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.949 [2024-07-25 13:52:51.877499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.949 qpair failed and we were unable to recover it. 00:23:54.949 [2024-07-25 13:52:51.877706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.949 [2024-07-25 13:52:51.877756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.949 qpair failed and we were unable to recover it. 00:23:54.949 [2024-07-25 13:52:51.877940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.949 [2024-07-25 13:52:51.878001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.949 qpair failed and we were unable to recover it. 00:23:54.949 [2024-07-25 13:52:51.878295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.949 [2024-07-25 13:52:51.878354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.949 qpair failed and we were unable to recover it. 00:23:54.949 [2024-07-25 13:52:51.878632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.949 [2024-07-25 13:52:51.878692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.949 qpair failed and we were unable to recover it. 00:23:54.949 [2024-07-25 13:52:51.878936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.949 [2024-07-25 13:52:51.878999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.949 qpair failed and we were unable to recover it. 00:23:54.949 [2024-07-25 13:52:51.879267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.949 [2024-07-25 13:52:51.879346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.949 qpair failed and we were unable to recover it. 
00:23:54.949 [2024-07-25 13:52:51.879598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.949 [2024-07-25 13:52:51.879647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.949 qpair failed and we were unable to recover it. 00:23:54.949 [2024-07-25 13:52:51.879823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.949 [2024-07-25 13:52:51.879873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.949 qpair failed and we were unable to recover it. 00:23:54.949 [2024-07-25 13:52:51.880026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.949 [2024-07-25 13:52:51.880087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.949 qpair failed and we were unable to recover it. 00:23:54.949 [2024-07-25 13:52:51.880315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.949 [2024-07-25 13:52:51.880364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.949 qpair failed and we were unable to recover it. 00:23:54.949 [2024-07-25 13:52:51.880593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.949 [2024-07-25 13:52:51.880641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.949 qpair failed and we were unable to recover it. 00:23:54.949 [2024-07-25 13:52:51.880894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.949 [2024-07-25 13:52:51.880945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.949 qpair failed and we were unable to recover it. 00:23:54.949 [2024-07-25 13:52:51.881130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.949 [2024-07-25 13:52:51.881185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.949 qpair failed and we were unable to recover it. 00:23:54.949 [2024-07-25 13:52:51.881397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.949 [2024-07-25 13:52:51.881450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.949 qpair failed and we were unable to recover it. 00:23:54.949 [2024-07-25 13:52:51.881671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.949 [2024-07-25 13:52:51.881723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.949 qpair failed and we were unable to recover it. 00:23:54.949 [2024-07-25 13:52:51.881918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.949 [2024-07-25 13:52:51.881978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:54.949 qpair failed and we were unable to recover it. 
00:23:55.230 [2024-07-25 13:52:51.939938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.939998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.940262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.940339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.940590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.940669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.940893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.940954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.941206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.941285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.941496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.941575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.941849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.941909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.942092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.942152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.942364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.942444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.942762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.942839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 
00:23:55.230 [2024-07-25 13:52:51.943111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.943172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.943482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.943560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.943808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.943885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.944184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.944270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.944505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.944567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.944822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.944900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.945163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.945242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.945457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.945537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.945817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.945877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.946111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.946171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 
00:23:55.230 [2024-07-25 13:52:51.946439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.946499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.946698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.946775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.947024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.947100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.947375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.947453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.947748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.947825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.948073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.948134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.948424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.948502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.948709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.948792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.949078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.949138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.949451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.949530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 
00:23:55.230 [2024-07-25 13:52:51.949821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.949898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.230 qpair failed and we were unable to recover it. 00:23:55.230 [2024-07-25 13:52:51.950171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.230 [2024-07-25 13:52:51.950232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.950537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.950614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.950877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.950953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.951247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.951325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.951578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.951655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.951886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.951946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.952220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.952299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.952592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.952670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.952908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.952969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 
00:23:55.231 [2024-07-25 13:52:51.953199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.953284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.953515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.953593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.953867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.953928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.954287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.954373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.954648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.954726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.955006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.955093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.955369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.955429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.955692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.955751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.956025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.956099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.956359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.956435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 
00:23:55.231 [2024-07-25 13:52:51.956708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.956768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.956995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.957057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.957374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.957452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.957710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.957796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.958087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.958148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.958410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.958488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.958771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.958833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.959037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.959130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.959405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.959466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.959736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.959795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 
00:23:55.231 [2024-07-25 13:52:51.960037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.960114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.960347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.960409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.960626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.960686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.960897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.960957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.961266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.231 [2024-07-25 13:52:51.961345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.231 qpair failed and we were unable to recover it. 00:23:55.231 [2024-07-25 13:52:51.961643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.961720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.961998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.962057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.962376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.962454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.962755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.962832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.963104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.963166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 
00:23:55.232 [2024-07-25 13:52:51.963430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.963508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.963754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.963831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.964080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.964143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.964445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.964522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.964791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.964852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.965032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.965106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.965367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.965444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.965717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.965795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.965974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.966035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.966304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.966380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 
00:23:55.232 [2024-07-25 13:52:51.966658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.966736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.967001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.967077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.967397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.967475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.967743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.967820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.968103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.968164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.968433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.968493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.968765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.968825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.969056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.969125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.969376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.969452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.969724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.969801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 
00:23:55.232 [2024-07-25 13:52:51.970034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.970105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.970339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.970400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.232 qpair failed and we were unable to recover it. 00:23:55.232 [2024-07-25 13:52:51.970664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.232 [2024-07-25 13:52:51.970742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.970979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.971049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.971343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.971420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.971692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.971751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.971997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.972057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.972377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.972453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.972667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.972744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.972966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.973026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 
00:23:55.233 [2024-07-25 13:52:51.973250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.973311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.973588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.973665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.973852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.973911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.974180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.974241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.974480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.974539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.974832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.974908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.975209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.975287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.975584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.975663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.975886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.975947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.976236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.976315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 
00:23:55.233 [2024-07-25 13:52:51.976616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.976694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.976922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.976982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.977297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.977374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.977675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.977752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.978024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.978102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.978411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.978488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.978741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.978819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.979018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.979111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.979369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.979450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.979753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.979832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 
00:23:55.233 [2024-07-25 13:52:51.980085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.980148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.980370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.980448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.980741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.980820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.981051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.981127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.981429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.981507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.981766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.981842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.233 qpair failed and we were unable to recover it. 00:23:55.233 [2024-07-25 13:52:51.982113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.233 [2024-07-25 13:52:51.982174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.234 qpair failed and we were unable to recover it. 00:23:55.234 [2024-07-25 13:52:51.982417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.234 [2024-07-25 13:52:51.982493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.234 qpair failed and we were unable to recover it. 00:23:55.234 [2024-07-25 13:52:51.982754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.234 [2024-07-25 13:52:51.982831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.234 qpair failed and we were unable to recover it. 00:23:55.234 [2024-07-25 13:52:51.983073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.234 [2024-07-25 13:52:51.983134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.234 qpair failed and we were unable to recover it. 
00:23:55.234 [2024-07-25 13:52:51.983413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.234 [2024-07-25 13:52:51.983473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.234 qpair failed and we were unable to recover it. 00:23:55.234 [2024-07-25 13:52:51.983720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.234 [2024-07-25 13:52:51.983797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.234 qpair failed and we were unable to recover it. 00:23:55.234 [2024-07-25 13:52:51.984072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.234 [2024-07-25 13:52:51.984132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.234 qpair failed and we were unable to recover it. 00:23:55.234 [2024-07-25 13:52:51.984365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.234 [2024-07-25 13:52:51.984436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.234 qpair failed and we were unable to recover it. 00:23:55.234 [2024-07-25 13:52:51.984696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.234 [2024-07-25 13:52:51.984774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.234 qpair failed and we were unable to recover it. 00:23:55.234 [2024-07-25 13:52:51.984983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.234 [2024-07-25 13:52:51.985044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.234 qpair failed and we were unable to recover it. 00:23:55.234 [2024-07-25 13:52:51.985298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.234 [2024-07-25 13:52:51.985357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.234 qpair failed and we were unable to recover it. 00:23:55.234 [2024-07-25 13:52:51.985521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.234 [2024-07-25 13:52:51.985581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.234 qpair failed and we were unable to recover it. 00:23:55.234 [2024-07-25 13:52:51.985845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.234 [2024-07-25 13:52:51.985924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.234 qpair failed and we were unable to recover it. 00:23:55.234 [2024-07-25 13:52:51.986191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.234 [2024-07-25 13:52:51.986269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.234 qpair failed and we were unable to recover it. 
00:23:55.234 [2024-07-25 13:52:51.986541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.234 [2024-07-25 13:52:51.986618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:55.234 qpair failed and we were unable to recover it.
...
00:23:55.239 [2024-07-25 13:52:52.030579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.239 [2024-07-25 13:52:52.030662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:55.239 qpair failed and we were unable to recover it.
00:23:55.239 [2024-07-25 13:52:52.030936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.239 [2024-07-25 13:52:52.031038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:55.239 qpair failed and we were unable to recover it.
...
00:23:55.242 [2024-07-25 13:52:52.053780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.242 [2024-07-25 13:52:52.053844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:55.242 qpair failed and we were unable to recover it.
00:23:55.242 [2024-07-25 13:52:52.054096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.242 [2024-07-25 13:52:52.054163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.242 qpair failed and we were unable to recover it. 00:23:55.242 [2024-07-25 13:52:52.054453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.242 [2024-07-25 13:52:52.054516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.242 qpair failed and we were unable to recover it. 00:23:55.242 [2024-07-25 13:52:52.054734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.242 [2024-07-25 13:52:52.054799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.242 qpair failed and we were unable to recover it. 00:23:55.242 [2024-07-25 13:52:52.055089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.242 [2024-07-25 13:52:52.055155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.242 qpair failed and we were unable to recover it. 00:23:55.242 [2024-07-25 13:52:52.055412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.242 [2024-07-25 13:52:52.055474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.242 qpair failed and we were unable to recover it. 00:23:55.242 [2024-07-25 13:52:52.055773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.242 [2024-07-25 13:52:52.055836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.242 qpair failed and we were unable to recover it. 00:23:55.242 [2024-07-25 13:52:52.056110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.242 [2024-07-25 13:52:52.056183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.242 qpair failed and we were unable to recover it. 00:23:55.242 [2024-07-25 13:52:52.056421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.242 [2024-07-25 13:52:52.056485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.242 qpair failed and we were unable to recover it. 00:23:55.242 [2024-07-25 13:52:52.056734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.242 [2024-07-25 13:52:52.056796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.242 qpair failed and we were unable to recover it. 00:23:55.242 [2024-07-25 13:52:52.057052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.242 [2024-07-25 13:52:52.057129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.242 qpair failed and we were unable to recover it. 
00:23:55.242 [2024-07-25 13:52:52.057368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.242 [2024-07-25 13:52:52.057431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.242 qpair failed and we were unable to recover it. 00:23:55.242 [2024-07-25 13:52:52.057678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.242 [2024-07-25 13:52:52.057741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.242 qpair failed and we were unable to recover it. 00:23:55.242 [2024-07-25 13:52:52.057987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.242 [2024-07-25 13:52:52.058088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.242 qpair failed and we were unable to recover it. 00:23:55.242 [2024-07-25 13:52:52.058346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.242 [2024-07-25 13:52:52.058411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.242 qpair failed and we were unable to recover it. 00:23:55.242 [2024-07-25 13:52:52.058701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.058764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 00:23:55.243 [2024-07-25 13:52:52.058975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.059040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 00:23:55.243 [2024-07-25 13:52:52.059382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.059447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 00:23:55.243 [2024-07-25 13:52:52.059705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.059767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 00:23:55.243 [2024-07-25 13:52:52.060049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.060133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 00:23:55.243 [2024-07-25 13:52:52.060446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.060510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 
00:23:55.243 [2024-07-25 13:52:52.060732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.060794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 00:23:55.243 [2024-07-25 13:52:52.061052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.061127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 00:23:55.243 [2024-07-25 13:52:52.061374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.061449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 00:23:55.243 [2024-07-25 13:52:52.061701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.061774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 00:23:55.243 [2024-07-25 13:52:52.061982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.062047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 00:23:55.243 [2024-07-25 13:52:52.062325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.062388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 00:23:55.243 [2024-07-25 13:52:52.062648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.062712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 00:23:55.243 [2024-07-25 13:52:52.062996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.063074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 00:23:55.243 [2024-07-25 13:52:52.063364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.063427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 00:23:55.243 [2024-07-25 13:52:52.063696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.063762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 
00:23:55.243 [2024-07-25 13:52:52.064018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.064111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 00:23:55.243 [2024-07-25 13:52:52.064408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.064470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 00:23:55.243 [2024-07-25 13:52:52.064722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.064787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 00:23:55.243 [2024-07-25 13:52:52.064987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.065051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 00:23:55.243 [2024-07-25 13:52:52.065259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.065334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 00:23:55.243 [2024-07-25 13:52:52.065597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.065661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 00:23:55.243 [2024-07-25 13:52:52.065875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.065937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 00:23:55.243 [2024-07-25 13:52:52.066225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.243 [2024-07-25 13:52:52.066290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.243 qpair failed and we were unable to recover it. 00:23:55.243 [2024-07-25 13:52:52.066563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.066627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 00:23:55.244 [2024-07-25 13:52:52.066870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.066933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 
00:23:55.244 [2024-07-25 13:52:52.067186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.067251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 00:23:55.244 [2024-07-25 13:52:52.067456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.067519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 00:23:55.244 [2024-07-25 13:52:52.067766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.067829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 00:23:55.244 [2024-07-25 13:52:52.068101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.068165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 00:23:55.244 [2024-07-25 13:52:52.068449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.068514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 00:23:55.244 [2024-07-25 13:52:52.068736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.068800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 00:23:55.244 [2024-07-25 13:52:52.069042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.069117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 00:23:55.244 [2024-07-25 13:52:52.069397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.069461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 00:23:55.244 [2024-07-25 13:52:52.069655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.069719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 00:23:55.244 [2024-07-25 13:52:52.069966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.070029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 
00:23:55.244 [2024-07-25 13:52:52.070311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.070379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 00:23:55.244 [2024-07-25 13:52:52.070578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.070640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 00:23:55.244 [2024-07-25 13:52:52.070835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.070900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 00:23:55.244 [2024-07-25 13:52:52.071149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.071215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 00:23:55.244 [2024-07-25 13:52:52.071512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.071575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 00:23:55.244 [2024-07-25 13:52:52.071829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.071892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 00:23:55.244 [2024-07-25 13:52:52.072157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.072224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 00:23:55.244 [2024-07-25 13:52:52.072434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.072497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 00:23:55.244 [2024-07-25 13:52:52.072786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.072849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 00:23:55.244 [2024-07-25 13:52:52.073049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.073125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 
00:23:55.244 [2024-07-25 13:52:52.073362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.073425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 00:23:55.244 [2024-07-25 13:52:52.073680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.244 [2024-07-25 13:52:52.073742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.244 qpair failed and we were unable to recover it. 00:23:55.244 [2024-07-25 13:52:52.074000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.245 [2024-07-25 13:52:52.074075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.245 qpair failed and we were unable to recover it. 00:23:55.245 [2024-07-25 13:52:52.074285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.245 [2024-07-25 13:52:52.074357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.245 qpair failed and we were unable to recover it. 00:23:55.245 [2024-07-25 13:52:52.074657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.245 [2024-07-25 13:52:52.074720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.245 qpair failed and we were unable to recover it. 00:23:55.245 [2024-07-25 13:52:52.074921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.245 [2024-07-25 13:52:52.074984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.245 qpair failed and we were unable to recover it. 00:23:55.245 [2024-07-25 13:52:52.075249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.245 [2024-07-25 13:52:52.075313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.245 qpair failed and we were unable to recover it. 00:23:55.245 [2024-07-25 13:52:52.075554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.245 [2024-07-25 13:52:52.075617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.245 qpair failed and we were unable to recover it. 00:23:55.245 [2024-07-25 13:52:52.075874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.245 [2024-07-25 13:52:52.075936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.245 qpair failed and we were unable to recover it. 00:23:55.245 [2024-07-25 13:52:52.076203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.245 [2024-07-25 13:52:52.076268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.245 qpair failed and we were unable to recover it. 
00:23:55.245 [2024-07-25 13:52:52.076561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.245 [2024-07-25 13:52:52.076624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.245 qpair failed and we were unable to recover it. 00:23:55.245 [2024-07-25 13:52:52.076869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.245 [2024-07-25 13:52:52.076934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.245 qpair failed and we were unable to recover it. 00:23:55.245 [2024-07-25 13:52:52.077196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.245 [2024-07-25 13:52:52.077262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.245 qpair failed and we were unable to recover it. 00:23:55.245 [2024-07-25 13:52:52.077563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.245 [2024-07-25 13:52:52.077627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.245 qpair failed and we were unable to recover it. 00:23:55.245 [2024-07-25 13:52:52.077844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.245 [2024-07-25 13:52:52.077906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.245 qpair failed and we were unable to recover it. 00:23:55.245 [2024-07-25 13:52:52.078146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.245 [2024-07-25 13:52:52.078211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.245 qpair failed and we were unable to recover it. 00:23:55.245 [2024-07-25 13:52:52.078424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.245 [2024-07-25 13:52:52.078490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.245 qpair failed and we were unable to recover it. 00:23:55.245 [2024-07-25 13:52:52.078738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.245 [2024-07-25 13:52:52.078801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.245 qpair failed and we were unable to recover it. 00:23:55.245 [2024-07-25 13:52:52.079088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.245 [2024-07-25 13:52:52.079153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.245 qpair failed and we were unable to recover it. 00:23:55.245 [2024-07-25 13:52:52.079401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.245 [2024-07-25 13:52:52.079466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.245 qpair failed and we were unable to recover it. 
00:23:55.245 [2024-07-25 13:52:52.079651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.079715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.079959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.080022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.080241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.080304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.080489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.080553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.080769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.080834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.081146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.081209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.081457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.081519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.081771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.081834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.082100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.082164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.082376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.082438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 
00:23:55.246 [2024-07-25 13:52:52.082692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.082756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.082990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.083053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.083282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.083345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.083630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.083693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.083984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.084047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.084328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.084393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.084642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.084707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.084998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.085080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.085388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.085452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.085740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.085802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 
00:23:55.246 [2024-07-25 13:52:52.086093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.086158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.086344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.086407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.086695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.086757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.087016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.087104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.087364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.087427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.087713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.087775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.087986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.088048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.088366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.088430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.088731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.246 [2024-07-25 13:52:52.088793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.246 qpair failed and we were unable to recover it. 00:23:55.246 [2024-07-25 13:52:52.089015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.247 [2024-07-25 13:52:52.089092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.247 qpair failed and we were unable to recover it. 
00:23:55.247 [2024-07-25 13:52:52.089375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.247 [2024-07-25 13:52:52.089438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.247 qpair failed and we were unable to recover it. 00:23:55.247 [2024-07-25 13:52:52.089641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.247 [2024-07-25 13:52:52.089703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.247 qpair failed and we were unable to recover it. 00:23:55.247 [2024-07-25 13:52:52.090002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.247 [2024-07-25 13:52:52.090077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.247 qpair failed and we were unable to recover it. 00:23:55.247 [2024-07-25 13:52:52.090291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.247 [2024-07-25 13:52:52.090354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.247 qpair failed and we were unable to recover it. 00:23:55.247 [2024-07-25 13:52:52.090598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.247 [2024-07-25 13:52:52.090663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.247 qpair failed and we were unable to recover it. 00:23:55.247 [2024-07-25 13:52:52.090966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.247 [2024-07-25 13:52:52.091029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.247 qpair failed and we were unable to recover it. 00:23:55.247 [2024-07-25 13:52:52.091310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.247 [2024-07-25 13:52:52.091372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.247 qpair failed and we were unable to recover it. 00:23:55.247 [2024-07-25 13:52:52.091666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.247 [2024-07-25 13:52:52.091729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.247 qpair failed and we were unable to recover it. 00:23:55.247 [2024-07-25 13:52:52.091940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.247 [2024-07-25 13:52:52.092003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.247 qpair failed and we were unable to recover it. 00:23:55.247 [2024-07-25 13:52:52.092242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.247 [2024-07-25 13:52:52.092304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.247 qpair failed and we were unable to recover it. 
00:23:55.247 [2024-07-25 13:52:52.092552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.247 [2024-07-25 13:52:52.092614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.247 qpair failed and we were unable to recover it. 00:23:55.247 [2024-07-25 13:52:52.092843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.247 [2024-07-25 13:52:52.092907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.247 qpair failed and we were unable to recover it. 00:23:55.247 [2024-07-25 13:52:52.093137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.247 [2024-07-25 13:52:52.093201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.247 qpair failed and we were unable to recover it. 00:23:55.247 [2024-07-25 13:52:52.093449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.247 [2024-07-25 13:52:52.093512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.247 qpair failed and we were unable to recover it. 00:23:55.247 [2024-07-25 13:52:52.093715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.247 [2024-07-25 13:52:52.093777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.247 qpair failed and we were unable to recover it. 00:23:55.247 [2024-07-25 13:52:52.094015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.247 [2024-07-25 13:52:52.094090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.247 qpair failed and we were unable to recover it. 00:23:55.247 [2024-07-25 13:52:52.094391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.247 [2024-07-25 13:52:52.094454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.247 qpair failed and we were unable to recover it. 00:23:55.247 [2024-07-25 13:52:52.094689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.247 [2024-07-25 13:52:52.094751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.247 qpair failed and we were unable to recover it. 00:23:55.247 [2024-07-25 13:52:52.094954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.247 [2024-07-25 13:52:52.095018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.247 qpair failed and we were unable to recover it. 00:23:55.247 [2024-07-25 13:52:52.095273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.247 [2024-07-25 13:52:52.095336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.247 qpair failed and we were unable to recover it. 
00:23:55.247 [2024-07-25 13:52:52.095596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.247 [2024-07-25 13:52:52.095660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:55.247 qpair failed and we were unable to recover it.
[the same three-line error block repeats back-to-back from 13:52:52.095 through 13:52:52.149; only the timestamps change. Every occurrence reports connect() failed with errno = 111 for tqpair=0x7f3c98000b90, addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it."]
00:23:55.253 [2024-07-25 13:52:52.149501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.253 [2024-07-25 13:52:52.149533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:55.253 qpair failed and we were unable to recover it.
00:23:55.253 [2024-07-25 13:52:52.149779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.149842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.150090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.150153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.150379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.150412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.150516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.150550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.150698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.150730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.150971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.151034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.151357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.151421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.151664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.151697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.151842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.151874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.152015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.152049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 
00:23:55.254 [2024-07-25 13:52:52.152207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.152240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.152460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.152493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.152634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.152667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.152841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.152904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.153186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.153252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.153502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.153534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.153652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.153685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.153821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.153854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.154118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.154182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.154440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.154503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 
00:23:55.254 [2024-07-25 13:52:52.154755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.154818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.155111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.155181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.155471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.155535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.155824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.155887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.156125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.156191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.156438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.156503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.156754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.156818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.157086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.157148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.157365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.157428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.157684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.157747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 
00:23:55.254 [2024-07-25 13:52:52.157998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.158084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.158375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.158438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.158692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.158756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.158964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.159026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.254 [2024-07-25 13:52:52.159296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.254 [2024-07-25 13:52:52.159358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.254 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.159631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.159694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.159889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.159953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.160219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.160284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.160569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.160632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.160921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.160984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 
00:23:55.255 [2024-07-25 13:52:52.161305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.161375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.161663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.161727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.162014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.162090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.162378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.162440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.162696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.162759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.162998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.163074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.163320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.163385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.163677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.163740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.163993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.164056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.164325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.164389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 
00:23:55.255 [2024-07-25 13:52:52.164666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.164729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.164967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.165032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.165339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.165402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.165644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.165707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.165997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.166072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.166369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.166432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.166684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.166749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.167054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.167131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.167379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.167443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.167728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.167792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 
00:23:55.255 [2024-07-25 13:52:52.168113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.168178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.168466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.168530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.168814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.168878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.169134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.169199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.169431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.169495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.169780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.169843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.170100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.170164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.255 qpair failed and we were unable to recover it. 00:23:55.255 [2024-07-25 13:52:52.170407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.255 [2024-07-25 13:52:52.170470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.170713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.170777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.171020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.171098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 
00:23:55.256 [2024-07-25 13:52:52.171385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.171459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.171725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.171788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.172078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.172141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.172432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.172496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.172705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.172769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.173017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.173093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.173336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.173401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.173687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.173750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.174043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.174119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.174403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.174467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 
00:23:55.256 [2024-07-25 13:52:52.174746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.174809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.175094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.175158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.175398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.175460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.175692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.175755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.176009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.176100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.176355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.176420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.176658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.176721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.176961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.177023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.177335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.177398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.177660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.177723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 
00:23:55.256 [2024-07-25 13:52:52.178012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.178085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.178382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.178446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.178706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.178770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.178978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.179041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.179345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.179409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.179690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.179754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.256 [2024-07-25 13:52:52.179997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.256 [2024-07-25 13:52:52.180073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.256 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.180348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.180412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.180664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.180727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.180966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.181028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 
00:23:55.257 [2024-07-25 13:52:52.181331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.181395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.181667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.181730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.182022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.182102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.182356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.182420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.182621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.182684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.182945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.183008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.183310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.183376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.183632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.183695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.183898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.183961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.184234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.184299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 
00:23:55.257 [2024-07-25 13:52:52.184565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.184638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.184931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.184994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.185288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.185353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.185651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.185713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.186006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.186085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.186325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.186388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.186631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.186696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.186938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.187002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.187274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.187339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.187633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.187696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 
00:23:55.257 [2024-07-25 13:52:52.187895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.187960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.188260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.188324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.188586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.188648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.188885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.188950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.189265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.189330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.189533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.189596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.189854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.189918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.190156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.190221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.190460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.190523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.190774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.190837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 
00:23:55.257 [2024-07-25 13:52:52.191091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.257 [2024-07-25 13:52:52.191155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.257 qpair failed and we were unable to recover it. 00:23:55.257 [2024-07-25 13:52:52.191397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.258 [2024-07-25 13:52:52.191461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.258 qpair failed and we were unable to recover it. 00:23:55.258 [2024-07-25 13:52:52.191751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.258 [2024-07-25 13:52:52.191813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.258 qpair failed and we were unable to recover it. 00:23:55.258 [2024-07-25 13:52:52.192094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.258 [2024-07-25 13:52:52.192157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.258 qpair failed and we were unable to recover it. 00:23:55.258 [2024-07-25 13:52:52.192367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.258 [2024-07-25 13:52:52.192430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.258 qpair failed and we were unable to recover it. 00:23:55.258 [2024-07-25 13:52:52.192665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.258 [2024-07-25 13:52:52.192729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.258 qpair failed and we were unable to recover it. 00:23:55.258 [2024-07-25 13:52:52.192968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.258 [2024-07-25 13:52:52.193031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.258 qpair failed and we were unable to recover it. 00:23:55.258 [2024-07-25 13:52:52.193345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.258 [2024-07-25 13:52:52.193410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.258 qpair failed and we were unable to recover it. 00:23:55.258 [2024-07-25 13:52:52.193627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.258 [2024-07-25 13:52:52.193690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.258 qpair failed and we were unable to recover it. 00:23:55.258 [2024-07-25 13:52:52.193975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.258 [2024-07-25 13:52:52.194037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.258 qpair failed and we were unable to recover it. 
00:23:55.541 [2024-07-25 13:52:52.259760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.541 [2024-07-25 13:52:52.259823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.541 qpair failed and we were unable to recover it. 00:23:55.541 [2024-07-25 13:52:52.260081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.541 [2024-07-25 13:52:52.260145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.541 qpair failed and we were unable to recover it. 00:23:55.541 [2024-07-25 13:52:52.260385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.541 [2024-07-25 13:52:52.260448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.541 qpair failed and we were unable to recover it. 00:23:55.541 [2024-07-25 13:52:52.260747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.541 [2024-07-25 13:52:52.260809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.541 qpair failed and we were unable to recover it. 00:23:55.541 [2024-07-25 13:52:52.261085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.541 [2024-07-25 13:52:52.261149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.541 qpair failed and we were unable to recover it. 00:23:55.541 [2024-07-25 13:52:52.261440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.541 [2024-07-25 13:52:52.261502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.541 qpair failed and we were unable to recover it. 00:23:55.541 [2024-07-25 13:52:52.261764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.541 [2024-07-25 13:52:52.261826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.541 qpair failed and we were unable to recover it. 00:23:55.541 [2024-07-25 13:52:52.262112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.541 [2024-07-25 13:52:52.262176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.541 qpair failed and we were unable to recover it. 00:23:55.541 [2024-07-25 13:52:52.262417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.541 [2024-07-25 13:52:52.262482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.541 qpair failed and we were unable to recover it. 00:23:55.541 [2024-07-25 13:52:52.262759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.541 [2024-07-25 13:52:52.262821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.541 qpair failed and we were unable to recover it. 
00:23:55.541 [2024-07-25 13:52:52.263052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.541 [2024-07-25 13:52:52.263129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.541 qpair failed and we were unable to recover it. 00:23:55.541 [2024-07-25 13:52:52.263437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.541 [2024-07-25 13:52:52.263502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.541 qpair failed and we were unable to recover it. 00:23:55.541 [2024-07-25 13:52:52.263784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.541 [2024-07-25 13:52:52.263846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.541 qpair failed and we were unable to recover it. 00:23:55.541 [2024-07-25 13:52:52.264107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.541 [2024-07-25 13:52:52.264171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.541 qpair failed and we were unable to recover it. 00:23:55.541 [2024-07-25 13:52:52.264423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.541 [2024-07-25 13:52:52.264486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.541 qpair failed and we were unable to recover it. 00:23:55.541 [2024-07-25 13:52:52.264747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.541 [2024-07-25 13:52:52.264810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.541 qpair failed and we were unable to recover it. 00:23:55.541 [2024-07-25 13:52:52.264998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.541 [2024-07-25 13:52:52.265076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.541 qpair failed and we were unable to recover it. 00:23:55.541 [2024-07-25 13:52:52.265374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.541 [2024-07-25 13:52:52.265436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.541 qpair failed and we were unable to recover it. 00:23:55.541 [2024-07-25 13:52:52.265690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.541 [2024-07-25 13:52:52.265754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.541 qpair failed and we were unable to recover it. 00:23:55.541 [2024-07-25 13:52:52.265959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.541 [2024-07-25 13:52:52.266022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.541 qpair failed and we were unable to recover it. 
00:23:55.541 [2024-07-25 13:52:52.266331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.541 [2024-07-25 13:52:52.266394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.541 qpair failed and we were unable to recover it. 00:23:55.541 [2024-07-25 13:52:52.266637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.541 [2024-07-25 13:52:52.266700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.541 qpair failed and we were unable to recover it. 00:23:55.541 [2024-07-25 13:52:52.266955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.267018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.267333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.267405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.267674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.267739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.268033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.268112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.268415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.268479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.268776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.268840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.269088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.269152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.269436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.269499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 
00:23:55.542 [2024-07-25 13:52:52.269745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.269809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.270048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.270124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.270423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.270486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.270741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.270803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.271085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.271150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.271389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.271453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.271708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.271772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.272074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.272149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.272410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.272474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.272731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.272794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 
00:23:55.542 [2024-07-25 13:52:52.273050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.273126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.273369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.273433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.273675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.273738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.274003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.274080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.274376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.274439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.274724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.274787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.275029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.275123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.275340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.275403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.275607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.275678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.275972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.276035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 
00:23:55.542 [2024-07-25 13:52:52.276345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.276409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.276677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.276741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.277028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.542 [2024-07-25 13:52:52.277106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.542 qpair failed and we were unable to recover it. 00:23:55.542 [2024-07-25 13:52:52.277400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.277463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.277747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.277809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.278072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.278135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.278355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.278417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.278664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.278728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.278979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.279041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.279361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.279424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 
00:23:55.543 [2024-07-25 13:52:52.279665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.279730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.280021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.280100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.280345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.280407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.280714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.280791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.281108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.281173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.281459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.281523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.281762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.281827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.282139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.282216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.282463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.282529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.282818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.282881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 
00:23:55.543 [2024-07-25 13:52:52.283123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.283188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.283411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.283475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.283727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.283793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.284010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.284092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.284365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.284430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.284673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.284737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.284980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.285045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.285315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.285391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.285654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.285719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.285945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.286009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 
00:23:55.543 [2024-07-25 13:52:52.286277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.286341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.286586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.286653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.286952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.287026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.543 [2024-07-25 13:52:52.287315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.543 [2024-07-25 13:52:52.287378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.543 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.287607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.287670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.287907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.287971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.288245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.288311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.288571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.288636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.288919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.288982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.289255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.289320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 
00:23:55.544 [2024-07-25 13:52:52.289576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.289641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.289943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.290008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.290305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.290369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.290615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.290679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.290970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.291033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.291370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.291435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.291682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.291748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.292031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.292114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.292406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.292470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.292759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.292824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 
00:23:55.544 [2024-07-25 13:52:52.293095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.293161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.293403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.293467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.293755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.293817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.294085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.294152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.294429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.294493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.294744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.294806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.295055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.295150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.295447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.295527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.295778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.295842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.296135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.296200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 
00:23:55.544 [2024-07-25 13:52:52.296485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.296548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.296754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.296816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.297031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.297112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.297323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.297388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.297591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.544 [2024-07-25 13:52:52.297657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.544 qpair failed and we were unable to recover it. 00:23:55.544 [2024-07-25 13:52:52.297953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.545 [2024-07-25 13:52:52.298017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.545 qpair failed and we were unable to recover it. 00:23:55.545 [2024-07-25 13:52:52.298287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.545 [2024-07-25 13:52:52.298351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.545 qpair failed and we were unable to recover it. 00:23:55.545 [2024-07-25 13:52:52.298646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.545 [2024-07-25 13:52:52.298721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.545 qpair failed and we were unable to recover it. 00:23:55.545 [2024-07-25 13:52:52.298980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.545 [2024-07-25 13:52:52.299044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.545 qpair failed and we were unable to recover it. 00:23:55.545 [2024-07-25 13:52:52.299300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.545 [2024-07-25 13:52:52.299364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.545 qpair failed and we were unable to recover it. 
00:23:55.545 [2024-07-25 13:52:52.299613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.545 [2024-07-25 13:52:52.299678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.545 qpair failed and we were unable to recover it. 00:23:55.545 [2024-07-25 13:52:52.299931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.545 [2024-07-25 13:52:52.299995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.545 qpair failed and we were unable to recover it. 00:23:55.545 [2024-07-25 13:52:52.300334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.545 [2024-07-25 13:52:52.300400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.545 qpair failed and we were unable to recover it. 00:23:55.545 [2024-07-25 13:52:52.300609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.545 [2024-07-25 13:52:52.300671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.545 qpair failed and we were unable to recover it. 00:23:55.545 [2024-07-25 13:52:52.300954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.545 [2024-07-25 13:52:52.301017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.545 qpair failed and we were unable to recover it. 00:23:55.545 [2024-07-25 13:52:52.301320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.545 [2024-07-25 13:52:52.301384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.545 qpair failed and we were unable to recover it. 00:23:55.545 [2024-07-25 13:52:52.301672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.545 [2024-07-25 13:52:52.301747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.545 qpair failed and we were unable to recover it. 00:23:55.545 [2024-07-25 13:52:52.302046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.545 [2024-07-25 13:52:52.302125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.545 qpair failed and we were unable to recover it. 00:23:55.545 [2024-07-25 13:52:52.302409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.545 [2024-07-25 13:52:52.302473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.545 qpair failed and we were unable to recover it. 00:23:55.545 [2024-07-25 13:52:52.302720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.545 [2024-07-25 13:52:52.302783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.545 qpair failed and we were unable to recover it. 
00:23:55.545 [2024-07-25 13:52:52.303048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.545 [2024-07-25 13:52:52.303129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:55.545 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats verbatim for every reconnect attempt between 13:52:52.303447 and 13:52:52.372694 ...]
00:23:55.552 [2024-07-25 13:52:52.372936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.552 [2024-07-25 13:52:52.372999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:55.552 qpair failed and we were unable to recover it.
00:23:55.552 [2024-07-25 13:52:52.373264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.373329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.373599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.373664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.373926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.373989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.374299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.374363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.374615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.374680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.374937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.375010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.375272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.375338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.375538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.375597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.375845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.375908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.376122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.376185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 
00:23:55.552 [2024-07-25 13:52:52.376408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.376488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.376783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.376848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.377140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.377205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.377455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.377517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.377801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.377863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.378176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.378244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.378538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.378601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.378840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.378913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.379140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.379207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.379492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.379557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 
00:23:55.552 [2024-07-25 13:52:52.379800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.379866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.380154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.380219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.380463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.380526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.380813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.380892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.381196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.381261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.381555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.381618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.381868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.552 [2024-07-25 13:52:52.381931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.552 qpair failed and we were unable to recover it. 00:23:55.552 [2024-07-25 13:52:52.382175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.382240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.382510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.382575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.382799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.382863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 
00:23:55.553 [2024-07-25 13:52:52.383107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.383172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.383388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.383452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.383734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.383797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.384074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.384141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.384380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.384446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.384734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.384797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.385087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.385152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.385448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.385513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.385772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.385837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.386090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.386156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 
00:23:55.553 [2024-07-25 13:52:52.386445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.386507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.386790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.386854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.387111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.387183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.387457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.387520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.387825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.387888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.388169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.388234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.388524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.388588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.388868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.388933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.389225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.389290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.389571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.389634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 
00:23:55.553 [2024-07-25 13:52:52.389921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.389991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.390262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.390328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.390619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.390683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.390904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.390967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.391270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.391334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.391634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.391700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.392022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.392106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.392400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.392473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.553 [2024-07-25 13:52:52.392725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.553 [2024-07-25 13:52:52.392790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.553 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.393080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.393157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 
00:23:55.554 [2024-07-25 13:52:52.393453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.393518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.393765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.393828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.394081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.394145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.394437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.394503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.394706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.394770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.395025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.395117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.395376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.395438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.395645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.395707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.395967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.396032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.396359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.396424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 
00:23:55.554 [2024-07-25 13:52:52.396669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.396733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.396982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.397046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.397323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.397386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.397679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.397745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.397941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.398007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.398245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.398310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.398511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.398575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.398819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.398899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.399140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.399206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.399423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.399487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 
00:23:55.554 [2024-07-25 13:52:52.399774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.399839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.400100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.400165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.400370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.400435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.400733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.400798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.401099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.401165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.401409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.401473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.401684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.401749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.402019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.402099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.402358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.554 [2024-07-25 13:52:52.402422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.554 qpair failed and we were unable to recover it. 00:23:55.554 [2024-07-25 13:52:52.402670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.402732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 
00:23:55.555 [2024-07-25 13:52:52.403015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.403093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.403319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.403385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.403623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.403687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.403939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.404004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.404272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.404336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.404556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.404620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.404909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.404974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.405288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.405364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.405665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.405730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.405964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.406027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 
00:23:55.555 [2024-07-25 13:52:52.406257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.406337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.406588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.406653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.406872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.406937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.407237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.407302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.407588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.407650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.407911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.407976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.408288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.408354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.408636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.408699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.408990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.409053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.409375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.409440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 
00:23:55.555 [2024-07-25 13:52:52.409693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.409757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.410027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.410111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.410401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.410464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.410722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.410793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.411087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.411153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.411405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.411468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.411721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.411785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.412020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.412101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.412313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.412377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 00:23:55.555 [2024-07-25 13:52:52.412594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.555 [2024-07-25 13:52:52.412659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.555 qpair failed and we were unable to recover it. 
00:23:55.556 [2024-07-25 13:52:52.412897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.556 [2024-07-25 13:52:52.412960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.556 qpair failed and we were unable to recover it. 00:23:55.556 [2024-07-25 13:52:52.413224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.556 [2024-07-25 13:52:52.413288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.556 qpair failed and we were unable to recover it. 00:23:55.556 [2024-07-25 13:52:52.413536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.556 [2024-07-25 13:52:52.413599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.556 qpair failed and we were unable to recover it. 00:23:55.556 [2024-07-25 13:52:52.413851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.556 [2024-07-25 13:52:52.413916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.556 qpair failed and we were unable to recover it. 00:23:55.556 [2024-07-25 13:52:52.414172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.556 [2024-07-25 13:52:52.414239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.556 qpair failed and we were unable to recover it. 00:23:55.556 [2024-07-25 13:52:52.414502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.556 [2024-07-25 13:52:52.414566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.556 qpair failed and we were unable to recover it. 00:23:55.556 [2024-07-25 13:52:52.414808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.556 [2024-07-25 13:52:52.414871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.556 qpair failed and we were unable to recover it. 00:23:55.556 [2024-07-25 13:52:52.415124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.556 [2024-07-25 13:52:52.415188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.556 qpair failed and we were unable to recover it. 00:23:55.556 [2024-07-25 13:52:52.415377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.556 [2024-07-25 13:52:52.415444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.556 qpair failed and we were unable to recover it. 00:23:55.556 [2024-07-25 13:52:52.415734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.556 [2024-07-25 13:52:52.415798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.556 qpair failed and we were unable to recover it. 
00:23:55.556 [2024-07-25 13:52:52.416014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.556 [2024-07-25 13:52:52.416093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:55.556 qpair failed and we were unable to recover it.
[... the same three error lines repeat, with only the timestamps advancing, for every subsequent connect attempt from 13:52:52.416321 through 13:52:52.485849 (~200 repetitions in this span, all against tqpair=0x7f3c98000b90, addr=10.0.0.2, port=4420, errno = 111) ...]
00:23:55.563 [2024-07-25 13:52:52.486113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.486177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.486478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.486541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.486786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.486850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.487112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.487178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.487464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.487528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.487788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.487852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.488091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.488157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.488448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.488511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.488761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.488825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.489081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.489147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 
00:23:55.563 [2024-07-25 13:52:52.489394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.489460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.489733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.489797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.490093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.490159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.490417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.490480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.490732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.490795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.491029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.491119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.491362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.491427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.491663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.491727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.491942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.492007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.492322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.492386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 
00:23:55.563 [2024-07-25 13:52:52.492638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.492702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.492995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.493073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.493286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.493349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.493589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.493652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.493929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.494002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.494260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.494326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.494624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.494687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.494980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.495042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.495295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.495360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 00:23:55.563 [2024-07-25 13:52:52.495554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.563 [2024-07-25 13:52:52.495619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.563 qpair failed and we were unable to recover it. 
00:23:55.564 [2024-07-25 13:52:52.495871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.495935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.496232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.496296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.496495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.496558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.496844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.496907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.497155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.497220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.497514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.497577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.497840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.497903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.498187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.498251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.498549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.498613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.498867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.498931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 
00:23:55.564 [2024-07-25 13:52:52.499116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.499181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.499405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.499469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.499761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.499824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.500041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.500120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.500381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.500444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.500744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.500807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.501073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.501137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.501352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.501416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.501658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.501722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.502019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.502096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 
00:23:55.564 [2024-07-25 13:52:52.502348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.502411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.502712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.502775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.503014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.503090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.503330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.503394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.503684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.503747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.504000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.504074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.504323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.504389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.504678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.504741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.505024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.505100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.505349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.505413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 
00:23:55.564 [2024-07-25 13:52:52.505660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.505724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.505962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.506025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.564 qpair failed and we were unable to recover it. 00:23:55.564 [2024-07-25 13:52:52.506305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.564 [2024-07-25 13:52:52.506370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.565 [2024-07-25 13:52:52.506660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.506724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.565 [2024-07-25 13:52:52.506973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.507045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.565 [2024-07-25 13:52:52.507319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.507384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.565 [2024-07-25 13:52:52.507678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.507740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.565 [2024-07-25 13:52:52.507982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.508045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.565 [2024-07-25 13:52:52.508352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.508415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.565 [2024-07-25 13:52:52.508689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.508752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 
00:23:55.565 [2024-07-25 13:52:52.509037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.509112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.565 [2024-07-25 13:52:52.509369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.509433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.565 [2024-07-25 13:52:52.509712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.509775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.565 [2024-07-25 13:52:52.510056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.510137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.565 [2024-07-25 13:52:52.510349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.510413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.565 [2024-07-25 13:52:52.510679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.510741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.565 [2024-07-25 13:52:52.511026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.511104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.565 [2024-07-25 13:52:52.511334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.511396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.565 [2024-07-25 13:52:52.511622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.511686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.565 [2024-07-25 13:52:52.511877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.511940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 
00:23:55.565 [2024-07-25 13:52:52.512237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.512301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.565 [2024-07-25 13:52:52.512575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.512639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.565 [2024-07-25 13:52:52.512888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.512950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.565 [2024-07-25 13:52:52.513209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.513275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.565 [2024-07-25 13:52:52.513533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.513597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.565 [2024-07-25 13:52:52.513835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.513897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.565 [2024-07-25 13:52:52.514181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.514245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.565 [2024-07-25 13:52:52.514433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.565 [2024-07-25 13:52:52.514498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.565 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.514750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.514813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.515049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.515143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 
00:23:55.566 [2024-07-25 13:52:52.515410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.515474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.515777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.515842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.516048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.516129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.516364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.516427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.516648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.516711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.516960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.517024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.517284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.517349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.517625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.517688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.517934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.518000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.518303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.518367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 
00:23:55.566 [2024-07-25 13:52:52.518608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.518671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.518961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.519024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.519298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.519362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.519546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.519610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.519851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.519923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.520180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.520245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.520536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.520599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.520883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.520946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.521196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.521260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.521550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.521613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 
00:23:55.566 [2024-07-25 13:52:52.521863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.521924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.522185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.522249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.522441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.522505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.522743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.522805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.566 qpair failed and we were unable to recover it. 00:23:55.566 [2024-07-25 13:52:52.523043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.566 [2024-07-25 13:52:52.523154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.567 qpair failed and we were unable to recover it. 00:23:55.567 [2024-07-25 13:52:52.523372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.567 [2024-07-25 13:52:52.523437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.567 qpair failed and we were unable to recover it. 00:23:55.567 [2024-07-25 13:52:52.523679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.567 [2024-07-25 13:52:52.523742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.567 qpair failed and we were unable to recover it. 00:23:55.567 [2024-07-25 13:52:52.524026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.567 [2024-07-25 13:52:52.524106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.567 qpair failed and we were unable to recover it. 00:23:55.567 [2024-07-25 13:52:52.524418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.567 [2024-07-25 13:52:52.524481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.567 qpair failed and we were unable to recover it. 00:23:55.567 [2024-07-25 13:52:52.524735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.567 [2024-07-25 13:52:52.524799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.567 qpair failed and we were unable to recover it. 
00:23:55.567 [2024-07-25 13:52:52.525052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.567 [2024-07-25 13:52:52.525128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.567 qpair failed and we were unable to recover it. 00:23:55.567 [2024-07-25 13:52:52.525419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.567 [2024-07-25 13:52:52.525482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.567 qpair failed and we were unable to recover it. 00:23:55.567 [2024-07-25 13:52:52.525681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.567 [2024-07-25 13:52:52.525744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.567 qpair failed and we were unable to recover it. 00:23:55.567 [2024-07-25 13:52:52.525994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.567 [2024-07-25 13:52:52.526057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.567 qpair failed and we were unable to recover it. 00:23:55.567 [2024-07-25 13:52:52.526288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.567 [2024-07-25 13:52:52.526353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.567 qpair failed and we were unable to recover it. 00:23:55.567 [2024-07-25 13:52:52.526645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.567 [2024-07-25 13:52:52.526709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.567 qpair failed and we were unable to recover it. 00:23:55.567 [2024-07-25 13:52:52.526947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.567 [2024-07-25 13:52:52.527010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.567 qpair failed and we were unable to recover it. 00:23:55.567 [2024-07-25 13:52:52.527314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.567 [2024-07-25 13:52:52.527378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.567 qpair failed and we were unable to recover it. 00:23:55.567 [2024-07-25 13:52:52.527623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.567 [2024-07-25 13:52:52.527688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.567 qpair failed and we were unable to recover it. 00:23:55.567 [2024-07-25 13:52:52.527982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.567 [2024-07-25 13:52:52.528046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.567 qpair failed and we were unable to recover it. 
00:23:55.567 [2024-07-25 13:52:52.528309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.567 [2024-07-25 13:52:52.528373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:55.567 qpair failed and we were unable to recover it.
[... the same three-line error repeats for every connection retry from 13:52:52.528663 through 13:52:52.597714: connect() failed, errno = 111; sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it ...]
00:23:55.855 [2024-07-25 13:52:52.597971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.855 [2024-07-25 13:52:52.598034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:55.855 qpair failed and we were unable to recover it.
00:23:55.855 [2024-07-25 13:52:52.598271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.598334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.598624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.598687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.598932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.598995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.599252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.599316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.599601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.599664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.599924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.599988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.600245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.600310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.600525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.600588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.600871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.600935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.601220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.601285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 
00:23:55.855 [2024-07-25 13:52:52.601489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.601552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.601796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.601860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.602114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.602180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.602435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.602498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.602698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.602762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.603020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.603111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.603401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.603464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.603725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.603788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.604089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.604153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.604436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.604499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 
00:23:55.855 [2024-07-25 13:52:52.604737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.604801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.605054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.605131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.605372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.605436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.605681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.605744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.605942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.606007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.606291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.606356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.606589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.606652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.606903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.606965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.607202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.607266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.607505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.607569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 
00:23:55.855 [2024-07-25 13:52:52.607808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.855 [2024-07-25 13:52:52.607873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.855 qpair failed and we were unable to recover it. 00:23:55.855 [2024-07-25 13:52:52.608120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.608196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.608453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.608518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.608707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.608770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.609023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.609104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.609342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.609406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.609619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.609683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.609923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.609986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.610258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.610322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.610575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.610640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 
00:23:55.856 [2024-07-25 13:52:52.610881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.610944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.611227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.611291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.611543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.611607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.611846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.611911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.612192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.612257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.612558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.612621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.612868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.612931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.613169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.613233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.613470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.613536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.613783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.613846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 
00:23:55.856 [2024-07-25 13:52:52.614098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.614164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.614381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.614447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.614685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.614748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.615033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.615111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.615392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.615455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.615697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.615760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.615999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.616076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.616275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.616339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.856 [2024-07-25 13:52:52.616647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.856 [2024-07-25 13:52:52.616710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.856 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.616956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.617022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 
00:23:55.857 [2024-07-25 13:52:52.617326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.617390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.617651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.617714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.617965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.618029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.618346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.618409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.618704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.618769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.619009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.619103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.619401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.619464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.619707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.619770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.619980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.620044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.620284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.620347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 
00:23:55.857 [2024-07-25 13:52:52.620591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.620654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.620909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.620982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.621262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.621326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.621567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.621632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.621876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.621941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.622183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.622249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.622489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.622554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.622823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.622887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.623149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.623215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.623462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.623525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 
00:23:55.857 [2024-07-25 13:52:52.623768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.623831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.624115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.624179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.624471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.624534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.624782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.624846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.625085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.625150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.625458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.625521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.625776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.625839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.626126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.626191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.626437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.626499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.626702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.626765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 
00:23:55.857 [2024-07-25 13:52:52.626990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.627053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.627329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.627392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.627689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.857 [2024-07-25 13:52:52.627753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.857 qpair failed and we were unable to recover it. 00:23:55.857 [2024-07-25 13:52:52.627965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.628027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.628297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.628361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.628556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.628622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.628914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.628977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.629214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.629278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.629510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.629574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.629872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.629935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 
00:23:55.858 [2024-07-25 13:52:52.630142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.630207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.630461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.630524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.630774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.630837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.631086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.631156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.631444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.631507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.631748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.631813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.632106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.632171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.632420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.632483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.632729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.632792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.632996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.633080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 
00:23:55.858 [2024-07-25 13:52:52.633347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.633410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.633703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.633776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.633978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.634043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.634359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.634422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.634711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.634775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.635022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.635119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.635381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.635444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.635721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.635784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.636001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.636083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.636359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.636423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 
00:23:55.858 [2024-07-25 13:52:52.636673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.636737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.636986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.637050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.637370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.637433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.637720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.637783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.638030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.638116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.638417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.638480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.638763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.638827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.858 [2024-07-25 13:52:52.639040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.858 [2024-07-25 13:52:52.639140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.858 qpair failed and we were unable to recover it. 00:23:55.859 [2024-07-25 13:52:52.639430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.859 [2024-07-25 13:52:52.639494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.859 qpair failed and we were unable to recover it. 00:23:55.859 [2024-07-25 13:52:52.639742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.859 [2024-07-25 13:52:52.639806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.859 qpair failed and we were unable to recover it. 
00:23:55.859 [2024-07-25 13:52:52.640095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.859 [2024-07-25 13:52:52.640160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:55.859 qpair failed and we were unable to recover it.
00:23:55.865 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 13:52:52.640 through 13:52:52.707 ...]
00:23:55.865 [2024-07-25 13:52:52.708221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.865 [2024-07-25 13:52:52.708285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.865 qpair failed and we were unable to recover it. 00:23:55.865 [2024-07-25 13:52:52.708532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.865 [2024-07-25 13:52:52.708595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.865 qpair failed and we were unable to recover it. 00:23:55.865 [2024-07-25 13:52:52.708836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.865 [2024-07-25 13:52:52.708899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.865 qpair failed and we were unable to recover it. 00:23:55.865 [2024-07-25 13:52:52.709188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.865 [2024-07-25 13:52:52.709251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.865 qpair failed and we were unable to recover it. 00:23:55.865 [2024-07-25 13:52:52.709500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.865 [2024-07-25 13:52:52.709565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.865 qpair failed and we were unable to recover it. 00:23:55.865 [2024-07-25 13:52:52.709814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.865 [2024-07-25 13:52:52.709878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.865 qpair failed and we were unable to recover it. 00:23:55.865 [2024-07-25 13:52:52.710116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.865 [2024-07-25 13:52:52.710180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.865 qpair failed and we were unable to recover it. 00:23:55.865 [2024-07-25 13:52:52.710432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.865 [2024-07-25 13:52:52.710495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.865 qpair failed and we were unable to recover it. 00:23:55.865 [2024-07-25 13:52:52.710754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.865 [2024-07-25 13:52:52.710817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.865 qpair failed and we were unable to recover it. 00:23:55.865 [2024-07-25 13:52:52.711073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.865 [2024-07-25 13:52:52.711137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.865 qpair failed and we were unable to recover it. 
00:23:55.865 [2024-07-25 13:52:52.711388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.865 [2024-07-25 13:52:52.711451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.865 qpair failed and we were unable to recover it. 00:23:55.865 [2024-07-25 13:52:52.711731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.865 [2024-07-25 13:52:52.711803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.865 qpair failed and we were unable to recover it. 00:23:55.865 [2024-07-25 13:52:52.712053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.865 [2024-07-25 13:52:52.712130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.865 qpair failed and we were unable to recover it. 00:23:55.865 [2024-07-25 13:52:52.712419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.865 [2024-07-25 13:52:52.712483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.865 qpair failed and we were unable to recover it. 00:23:55.865 [2024-07-25 13:52:52.712743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.865 [2024-07-25 13:52:52.712805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.865 qpair failed and we were unable to recover it. 00:23:55.865 [2024-07-25 13:52:52.713071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.865 [2024-07-25 13:52:52.713135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.865 qpair failed and we were unable to recover it. 00:23:55.865 [2024-07-25 13:52:52.713326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.865 [2024-07-25 13:52:52.713390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.865 qpair failed and we were unable to recover it. 00:23:55.865 [2024-07-25 13:52:52.713618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.865 [2024-07-25 13:52:52.713680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.865 qpair failed and we were unable to recover it. 00:23:55.865 [2024-07-25 13:52:52.713926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.865 [2024-07-25 13:52:52.713991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.865 qpair failed and we were unable to recover it. 00:23:55.865 [2024-07-25 13:52:52.714296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.865 [2024-07-25 13:52:52.714361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.865 qpair failed and we were unable to recover it. 
00:23:55.865 [2024-07-25 13:52:52.714650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.865 [2024-07-25 13:52:52.714713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.865 qpair failed and we were unable to recover it. 00:23:55.865 [2024-07-25 13:52:52.715005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.865 [2024-07-25 13:52:52.715097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.865 qpair failed and we were unable to recover it. 00:23:55.865 [2024-07-25 13:52:52.715343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.865 [2024-07-25 13:52:52.715407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.865 qpair failed and we were unable to recover it. 00:23:55.865 [2024-07-25 13:52:52.715603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.715666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.715926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.715990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.716313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.716376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.716663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.716726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.717023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.717102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.717398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.717462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.717679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.717742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 
00:23:55.866 [2024-07-25 13:52:52.717977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.718039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.718318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.718381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.718637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.718700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.718962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.719025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.719338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.719402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.719698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.719762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.720005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.720087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.720338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.720403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.720660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.720726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.721020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.721099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 
00:23:55.866 [2024-07-25 13:52:52.721343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.721406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.721688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.721751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.721986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.722051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.722363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.722426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.722670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.722734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.723022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.723117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.723405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.723468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.723718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.723781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.724085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.724150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.724432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.724495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 
00:23:55.866 [2024-07-25 13:52:52.724745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.724808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.725043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.725130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.725403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.866 [2024-07-25 13:52:52.725466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.866 qpair failed and we were unable to recover it. 00:23:55.866 [2024-07-25 13:52:52.725664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.725728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.725978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.726041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.726350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.726413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.726699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.726762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.727004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.727080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.727294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.727358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.727578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.727641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 
00:23:55.867 [2024-07-25 13:52:52.727895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.727958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.728215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.728279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.728520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.728583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.728828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.728898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.729192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.729257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.729564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.729628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.729927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.729990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.730275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.730340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.730629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.730692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.730950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.731013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 
00:23:55.867 [2024-07-25 13:52:52.731347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.731413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.731657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.731722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.731958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.732023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.732341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.732404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.732696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.732759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.733011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.733089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.733392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.733457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.733694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.733757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.734026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.734105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.734313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.734377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 
00:23:55.867 [2024-07-25 13:52:52.734634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.734698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.734953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.735016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.735319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.735381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.735633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.735696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.735904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.735969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.736221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.736285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.736534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.736597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.867 [2024-07-25 13:52:52.736801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.867 [2024-07-25 13:52:52.736866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.867 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.737157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.737221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.737517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.737581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 
00:23:55.868 [2024-07-25 13:52:52.737828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.737892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.738138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.738215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.738473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.738537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.738803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.738866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.739171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.739234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.739478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.739541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.739772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.739835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.740088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.740154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.740402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.740468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.740724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.740789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 
00:23:55.868 [2024-07-25 13:52:52.741084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.741149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.741433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.741496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.741682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.741746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.742046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.742125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.742410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.742474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.742709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.742772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.743014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.743130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.743335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.743398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.743630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.743692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.743959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.744022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 
00:23:55.868 [2024-07-25 13:52:52.744241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.744304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.744542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.744605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.744840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.744903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.745148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.745213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.745472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.745536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.745819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.745883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.746133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.746198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.746450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.746514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.746815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.746880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.747124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.747189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 
00:23:55.868 [2024-07-25 13:52:52.747387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.747453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.747702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.868 [2024-07-25 13:52:52.747765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.868 qpair failed and we were unable to recover it. 00:23:55.868 [2024-07-25 13:52:52.747978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.869 [2024-07-25 13:52:52.748043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.869 qpair failed and we were unable to recover it. 00:23:55.869 [2024-07-25 13:52:52.748318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.869 [2024-07-25 13:52:52.748381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.869 qpair failed and we were unable to recover it. 00:23:55.869 [2024-07-25 13:52:52.748680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.869 [2024-07-25 13:52:52.748742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.869 qpair failed and we were unable to recover it. 00:23:55.869 [2024-07-25 13:52:52.748961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.869 [2024-07-25 13:52:52.749029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.869 qpair failed and we were unable to recover it. 00:23:55.869 [2024-07-25 13:52:52.749300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.869 [2024-07-25 13:52:52.749364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.869 qpair failed and we were unable to recover it. 00:23:55.869 [2024-07-25 13:52:52.749660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.869 [2024-07-25 13:52:52.749722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.869 qpair failed and we were unable to recover it. 00:23:55.869 [2024-07-25 13:52:52.749930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.869 [2024-07-25 13:52:52.749993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.869 qpair failed and we were unable to recover it. 00:23:55.869 [2024-07-25 13:52:52.750231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.869 [2024-07-25 13:52:52.750295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.869 qpair failed and we were unable to recover it. 
00:23:55.869 [2024-07-25 13:52:52.750537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.869 [2024-07-25 13:52:52.750600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:55.869 qpair failed and we were unable to recover it.
00:23:55.869 [last 3 messages repeated for each reconnect attempt through 13:52:52.795127]
00:23:55.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 665197 Killed "${NVMF_APP[@]}" "$@"
00:23:55.870 13:52:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:23:55.871 13:52:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:23:55.871 13:52:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:55.871 13:52:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:23:55.871 13:52:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:23:55.872 13:52:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=665647
00:23:55.872 13:52:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:23:55.872 13:52:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 665647
00:23:55.872 13:52:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 665647 ']'
00:23:55.872 13:52:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:55.872 13:52:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:55.872 13:52:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:55.872 13:52:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:55.872 13:52:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:23:55.875 [2024-07-25 13:52:52.795258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.875 [2024-07-25 13:52:52.795284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.875 qpair failed and we were unable to recover it. 00:23:55.875 [2024-07-25 13:52:52.795437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.875 [2024-07-25 13:52:52.795464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.875 qpair failed and we were unable to recover it. 00:23:55.875 [2024-07-25 13:52:52.795582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.875 [2024-07-25 13:52:52.795608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.875 qpair failed and we were unable to recover it. 00:23:55.875 [2024-07-25 13:52:52.795727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.875 [2024-07-25 13:52:52.795768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.875 qpair failed and we were unable to recover it. 00:23:55.875 [2024-07-25 13:52:52.795864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.875 [2024-07-25 13:52:52.795889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.875 qpair failed and we were unable to recover it. 00:23:55.875 [2024-07-25 13:52:52.795979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.875 [2024-07-25 13:52:52.796006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.875 qpair failed and we were unable to recover it. 00:23:55.875 [2024-07-25 13:52:52.796118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.875 [2024-07-25 13:52:52.796147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.875 qpair failed and we were unable to recover it. 00:23:55.875 [2024-07-25 13:52:52.796269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.875 [2024-07-25 13:52:52.796295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.875 qpair failed and we were unable to recover it. 00:23:55.875 [2024-07-25 13:52:52.796375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.875 [2024-07-25 13:52:52.796401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.875 qpair failed and we were unable to recover it. 00:23:55.875 [2024-07-25 13:52:52.796517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.875 [2024-07-25 13:52:52.796544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.875 qpair failed and we were unable to recover it. 
00:23:55.875 [2024-07-25 13:52:52.796639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.875 [2024-07-25 13:52:52.796665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.875 qpair failed and we were unable to recover it. 00:23:55.875 [2024-07-25 13:52:52.796767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.875 [2024-07-25 13:52:52.796793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.875 qpair failed and we were unable to recover it. 00:23:55.875 [2024-07-25 13:52:52.796879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.875 [2024-07-25 13:52:52.796904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.875 qpair failed and we were unable to recover it. 00:23:55.875 [2024-07-25 13:52:52.796991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.875 [2024-07-25 13:52:52.797016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.875 qpair failed and we were unable to recover it. 00:23:55.875 [2024-07-25 13:52:52.797106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.875 [2024-07-25 13:52:52.797135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.875 qpair failed and we were unable to recover it. 00:23:55.875 [2024-07-25 13:52:52.797233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.875 [2024-07-25 13:52:52.797258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.875 qpair failed and we were unable to recover it. 00:23:55.875 [2024-07-25 13:52:52.797388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.875 [2024-07-25 13:52:52.797413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.875 qpair failed and we were unable to recover it. 00:23:55.875 [2024-07-25 13:52:52.797494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.875 [2024-07-25 13:52:52.797519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.875 qpair failed and we were unable to recover it. 00:23:55.875 [2024-07-25 13:52:52.797614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.875 [2024-07-25 13:52:52.797640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.875 qpair failed and we were unable to recover it. 00:23:55.875 [2024-07-25 13:52:52.797739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.797765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 
00:23:55.876 [2024-07-25 13:52:52.797855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.797880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.797969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.797995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.798118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.798145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.798240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.798266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.798380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.798405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.798512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.798537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.798652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.798677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.798765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.798790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.798874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.798900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.799010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.799036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 
00:23:55.876 [2024-07-25 13:52:52.799136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.799164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.799254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.799278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.799365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.799394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.799476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.799501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.799600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.799626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.799712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.799739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.799866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.799897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.799981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.800006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.800102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.800128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.800210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.800236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 
00:23:55.876 [2024-07-25 13:52:52.800329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.800355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.800474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.800499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.800594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.800621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.800706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.800732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.800837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.800862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.800945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.800970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.801069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.801096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.801187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.801218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.801303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.876 [2024-07-25 13:52:52.801329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.876 qpair failed and we were unable to recover it. 00:23:55.876 [2024-07-25 13:52:52.801417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.801443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 
00:23:55.877 [2024-07-25 13:52:52.801540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.801565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.801657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.801683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.801791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.801817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.801924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.801950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.802031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.802057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.802184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.802209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.802299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.802336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.802449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.802475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.802622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.802648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.802739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.802765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 
00:23:55.877 [2024-07-25 13:52:52.802880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.802906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.802998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.803024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.803129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.803155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.803272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.803297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.803383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.803409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.803497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.803522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.803673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.803699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.803827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.803853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.803939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.803964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.804099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.804126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 
00:23:55.877 [2024-07-25 13:52:52.804241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.804267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.804309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1199230 (9): Bad file descriptor 00:23:55.877 [2024-07-25 13:52:52.804460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.804500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.804640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.804666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.804756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.804781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.804920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.804945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.805083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.805109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.805187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.805211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.805296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.805319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.805407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.805431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 
00:23:55.877 [2024-07-25 13:52:52.805517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.805540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.805622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.805647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.805735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.877 [2024-07-25 13:52:52.805760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.877 qpair failed and we were unable to recover it. 00:23:55.877 [2024-07-25 13:52:52.805875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.805898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.806020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.806048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.806180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.806206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.806330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.806360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.806469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.806494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.806609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.806634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.806785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.806811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 
00:23:55.878 [2024-07-25 13:52:52.806898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.806925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.807019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.807043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.807169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.807209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.807302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.807328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.807458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.807485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.807603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.807628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.807713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.807739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.807830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.807855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.807949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.807974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.808085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.808113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 
00:23:55.878 [2024-07-25 13:52:52.808216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.808242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.808332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.808357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.808497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.808523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.808637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.808661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.808778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.808803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.808892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.808918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.809003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.809028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.809126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.809152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.809237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.809262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.809346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.809371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 
00:23:55.878 [2024-07-25 13:52:52.809484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.809511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.809597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.809622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.809735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.809761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.809853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.809878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.809961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.809988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.878 [2024-07-25 13:52:52.810724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.878 [2024-07-25 13:52:52.810754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.878 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.810863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.810890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.810978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.811003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.811096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.811122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.811217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.811244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 
00:23:55.879 [2024-07-25 13:52:52.811363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.811389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.811481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.811505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.811601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.811627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.811720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.811745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.811861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.811887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.812002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.812027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.812126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.812158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.812250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.812276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.812359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.812385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.812491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.812516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 
00:23:55.879 [2024-07-25 13:52:52.812600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.812625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.812730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.812755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.812866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.812892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.812991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.813016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.813109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.813134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.813216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.813240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.813347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.813371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.813473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.813498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.813592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.813617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.813711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.813736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 
00:23:55.879 [2024-07-25 13:52:52.813859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.813884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.813999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.814025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.814122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.814148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.814223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.814249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.814334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.814360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.814457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.814482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.814573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.814598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.814683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.814708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.814818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.814844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 00:23:55.879 [2024-07-25 13:52:52.814932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.879 [2024-07-25 13:52:52.814956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.879 qpair failed and we were unable to recover it. 
00:23:55.880 [2024-07-25 13:52:52.815071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.815096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.815191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.815216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.815301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.815326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.815433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.815471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.815590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.815617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.815739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.815766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.815858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.815884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.815971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.815996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.816079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.816106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.816209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.816236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 
00:23:55.880 [2024-07-25 13:52:52.816317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.816343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.816434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.816459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.816550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.816577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.816662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.816688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.816773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.816798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.816908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.816934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.817016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.817045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.817164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.817202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.817321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.817347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.817430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.817455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 
00:23:55.880 [2024-07-25 13:52:52.817536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.817560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.817678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.817704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.817793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.817817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.817926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.817951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.818032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.818056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.818153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.818178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.818268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.818293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.818376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.818401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.818485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.818509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.818625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.818650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 
00:23:55.880 [2024-07-25 13:52:52.818777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.818805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.818890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.880 [2024-07-25 13:52:52.818916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.880 qpair failed and we were unable to recover it. 00:23:55.880 [2024-07-25 13:52:52.819030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.819055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.819149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.819175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.819287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.819312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.819453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.819480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.819607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.819635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.819719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.819743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.819841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.819866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.819956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.819980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 
00:23:55.881 [2024-07-25 13:52:52.820075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.820101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.820197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.820221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.820302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.820328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.820404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.820434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.820515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.820539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.821249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.821278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.821382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.821408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.821513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.821539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.821627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.821650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.821742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.821768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 
00:23:55.881 [2024-07-25 13:52:52.821883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.821907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.822051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.822082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.822194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.822219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.822310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.822335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.822419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.822443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.822551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.822576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.822672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.822696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.822781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.822806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.822917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.822941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.823028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.823053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 
00:23:55.881 [2024-07-25 13:52:52.823156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.823181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.823262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.823286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.823396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.823420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.823530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.823554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.881 qpair failed and we were unable to recover it. 00:23:55.881 [2024-07-25 13:52:52.823642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.881 [2024-07-25 13:52:52.823666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.823760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.823784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.823867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.823891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.824002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.824026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.824125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.824163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.824276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.824315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 
00:23:55.882 [2024-07-25 13:52:52.824440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.824466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.824588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.824614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.824727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.824752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.824840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.824866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.824947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.824972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.825080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.825107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.825224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.825249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.825336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.825361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.825456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.825482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.825567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.825593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 
00:23:55.882 [2024-07-25 13:52:52.825712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.825739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.825823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.825848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.825943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.825969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.826047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.826079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.826171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.826196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.826286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.826310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.826426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.826451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.826558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.826582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.826666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.826691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.826777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.826801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 
00:23:55.882 [2024-07-25 13:52:52.826916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.826940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.827026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.827051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.827144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.827169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.882 qpair failed and we were unable to recover it. 00:23:55.882 [2024-07-25 13:52:52.827250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.882 [2024-07-25 13:52:52.827275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.827364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.827389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.827475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.827500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.827616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.827640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.827788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.827817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.827902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.827926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.828008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.828032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 
00:23:55.883 [2024-07-25 13:52:52.828134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.828159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.828271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.828296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.828381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.828405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.828496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.828521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.828601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.828626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.828765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.828790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.828873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.828897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.828981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.829006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.829093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.829118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.829199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.829224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 
00:23:55.883 [2024-07-25 13:52:52.829315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.829340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.829490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.829515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.829597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.829621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.829715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.829744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.829831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.829856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.829965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.829991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.830093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.830120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.830234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.830260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.830375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.830401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.830539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.830564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 
00:23:55.883 [2024-07-25 13:52:52.830701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.830727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.830812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.830837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.830960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.830985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.831097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.831123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.831219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.831249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.831332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.831358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.831465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.831490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.831584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.883 [2024-07-25 13:52:52.831609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.883 qpair failed and we were unable to recover it. 00:23:55.883 [2024-07-25 13:52:52.831702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.884 [2024-07-25 13:52:52.831732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.884 qpair failed and we were unable to recover it. 00:23:55.884 [2024-07-25 13:52:52.831814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.884 [2024-07-25 13:52:52.831839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.884 qpair failed and we were unable to recover it. 
00:23:55.884 [2024-07-25 13:52:52.831952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.884 [2024-07-25 13:52:52.831978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:55.884 qpair failed and we were unable to recover it.
00:23:55.884 [2024-07-25 13:52:52.832056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.884 [2024-07-25 13:52:52.832087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:55.884 qpair failed and we were unable to recover it.
00:23:55.884 [2024-07-25 13:52:52.832168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.884 [2024-07-25 13:52:52.832194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:55.884 qpair failed and we were unable to recover it.
00:23:55.884 [2024-07-25 13:52:52.832271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.884 [2024-07-25 13:52:52.832296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:55.884 qpair failed and we were unable to recover it.
00:23:55.884 [2024-07-25 13:52:52.832388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.884 [2024-07-25 13:52:52.832419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.884 qpair failed and we were unable to recover it.
00:23:55.884 [2024-07-25 13:52:52.832511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.884 [2024-07-25 13:52:52.832537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.884 qpair failed and we were unable to recover it.
00:23:55.884 [2024-07-25 13:52:52.832559] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:23:55.884 [2024-07-25 13:52:52.832654] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:55.884 [2024-07-25 13:52:52.832676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.884 [2024-07-25 13:52:52.832702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.884 qpair failed and we were unable to recover it.
00:23:55.884 [2024-07-25 13:52:52.832797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.884 [2024-07-25 13:52:52.832821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.884 qpair failed and we were unable to recover it.
00:23:55.884 [2024-07-25 13:52:52.832909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.884 [2024-07-25 13:52:52.832934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.884 qpair failed and we were unable to recover it.
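The two lines timestamped 13:52:52.832559 and 13:52:52.832654 above are the nvmf target's startup banner interleaved with the host-side connect retries: the target is started under DPDK 24.03.0 with an EAL coremask of 0xF0, i.e. pinned to four logical cores. A minimal standalone sketch (plain C, not SPDK/DPDK code) that decodes such a coremask:

    /* Decode a DPDK-style hex coremask such as the "-c 0xF0" in the EAL
     * line above. Each set bit selects one logical CPU, so 0xF0 = cores
     * 4-7. Illustrative only; nothing here comes from the build itself. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long coremask = 0xF0; /* value taken from the log's "-c 0xF0" */

        printf("coremask 0x%lX selects cores:", coremask);
        for (int cpu = 0; cpu < 64; cpu++) {
            if (coremask & (1UL << cpu))
                printf(" %d", cpu);
        }
        printf("\n"); /* prints: coremask 0xF0 selects cores: 4 5 6 7 */
        return 0;
    }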
00:23:55.884 [2024-07-25 13:52:52.833052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.884 [2024-07-25 13:52:52.833082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.884 qpair failed and we were unable to recover it. 00:23:55.884 [2024-07-25 13:52:52.833172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.884 [2024-07-25 13:52:52.833196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.884 qpair failed and we were unable to recover it. 00:23:55.884 [2024-07-25 13:52:52.833284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.884 [2024-07-25 13:52:52.833315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.884 qpair failed and we were unable to recover it. 00:23:55.884 [2024-07-25 13:52:52.833412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.884 [2024-07-25 13:52:52.833437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.884 qpair failed and we were unable to recover it. 00:23:55.884 [2024-07-25 13:52:52.833552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.884 [2024-07-25 13:52:52.833576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.884 qpair failed and we were unable to recover it. 00:23:55.884 [2024-07-25 13:52:52.833662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.884 [2024-07-25 13:52:52.833688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.884 qpair failed and we were unable to recover it. 00:23:55.884 [2024-07-25 13:52:52.833809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.884 [2024-07-25 13:52:52.833834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.884 qpair failed and we were unable to recover it. 00:23:55.884 [2024-07-25 13:52:52.833924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.884 [2024-07-25 13:52:52.833951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.884 qpair failed and we were unable to recover it. 00:23:55.884 [2024-07-25 13:52:52.834040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.884 [2024-07-25 13:52:52.834071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.884 qpair failed and we were unable to recover it. 00:23:55.884 [2024-07-25 13:52:52.834157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.884 [2024-07-25 13:52:52.834182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.884 qpair failed and we were unable to recover it. 
00:23:55.884 [2024-07-25 13:52:52.834277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.884 [2024-07-25 13:52:52.834301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.884 qpair failed and we were unable to recover it. 00:23:55.884 [2024-07-25 13:52:52.834422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.884 [2024-07-25 13:52:52.834449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.884 qpair failed and we were unable to recover it. 00:23:55.884 [2024-07-25 13:52:52.834565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.884 [2024-07-25 13:52:52.834591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.884 qpair failed and we were unable to recover it. 00:23:55.884 [2024-07-25 13:52:52.834681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.884 [2024-07-25 13:52:52.834707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.884 qpair failed and we were unable to recover it. 00:23:55.884 [2024-07-25 13:52:52.834816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.884 [2024-07-25 13:52:52.834840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.884 qpair failed and we were unable to recover it. 00:23:55.884 [2024-07-25 13:52:52.834934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.884 [2024-07-25 13:52:52.834960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.884 qpair failed and we were unable to recover it. 00:23:55.884 [2024-07-25 13:52:52.835049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.884 [2024-07-25 13:52:52.835088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.884 qpair failed and we were unable to recover it. 00:23:55.884 [2024-07-25 13:52:52.835197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.884 [2024-07-25 13:52:52.835222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.884 qpair failed and we were unable to recover it. 00:23:55.884 [2024-07-25 13:52:52.835308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.884 [2024-07-25 13:52:52.835334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.884 qpair failed and we were unable to recover it. 00:23:55.884 [2024-07-25 13:52:52.835421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.884 [2024-07-25 13:52:52.835447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.884 qpair failed and we were unable to recover it. 
00:23:55.884 [2024-07-25 13:52:52.835537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.884 [2024-07-25 13:52:52.835563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.835652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.835677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.835783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.835813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.835906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.835932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.836044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.836082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.836179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.836203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.836281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.836307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.836387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.836412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.836522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.836548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.836636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.836661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 
00:23:55.885 [2024-07-25 13:52:52.836767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.836808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.836933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.836961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.837047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.837080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.837166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.837192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.837280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.837308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.837452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.837478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.837559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.837586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.837670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.837696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.837811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.837838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.837959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.837986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 
00:23:55.885 [2024-07-25 13:52:52.838083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.838108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.838192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.838217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.838313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.838338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.838482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.838506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.838619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.838644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.838733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.838758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.838883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.838914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.839013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.839052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.839155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.839182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.839268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.839293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 
00:23:55.885 [2024-07-25 13:52:52.839409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.839436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.839516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.839545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.839662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.839689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.839779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.839804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.885 [2024-07-25 13:52:52.839883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.885 [2024-07-25 13:52:52.839909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.885 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.839995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.840020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.840136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.840162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.840240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.840265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.840379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.840405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.840490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.840515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 
00:23:55.886 [2024-07-25 13:52:52.840603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.840627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.840715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.840740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.840848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.840873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.841011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.841036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.841127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.841155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.841253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.841280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.841368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.841395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.841507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.841532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.841624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.841655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.841743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.841769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 
00:23:55.886 [2024-07-25 13:52:52.841890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.841916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.841999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.842025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.842114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.842141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.842237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.842264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.842352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.842378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.842468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.842494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.842588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.842615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.842727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.842754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.842864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.842895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.843004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.843030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 
00:23:55.886 [2024-07-25 13:52:52.843127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.843154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.843241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.843267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.843351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.843377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.843471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.843498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.843573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.843599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.843683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.843710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.843805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.843832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.843914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.843939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.844065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.844092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.886 qpair failed and we were unable to recover it. 00:23:55.886 [2024-07-25 13:52:52.844182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.886 [2024-07-25 13:52:52.844208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 
00:23:55.887 [2024-07-25 13:52:52.844304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.844331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.844448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.844475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.844569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.844594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.844706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.844732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.844822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.844848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.844958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.844983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.845079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.845106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.845193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.845219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.845329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.845356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.845435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.845460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 
00:23:55.887 [2024-07-25 13:52:52.845575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.845601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.845716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.845742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.845869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.845908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.845999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.846026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.846123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.846148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.846236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.846263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.846352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.846379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.846460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.846487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.846628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.846654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.846815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.846855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 
00:23:55.887 [2024-07-25 13:52:52.846951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.846983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.847094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.847124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.847325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.847353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.847466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.847492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.847581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.847607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.847685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.847711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.847823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.847850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.847956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.847995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.887 qpair failed and we were unable to recover it. 00:23:55.887 [2024-07-25 13:52:52.848098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.887 [2024-07-25 13:52:52.848130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.848226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.848252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 
00:23:55.888 [2024-07-25 13:52:52.848339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.848365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.848450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.848476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.848561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.848587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.848680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.848707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.848832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.848870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.848961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.848987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.849082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.849108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.849191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.849216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.849298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.849323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.849441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.849465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 
00:23:55.888 [2024-07-25 13:52:52.849556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.849581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.849698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.849722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.849813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.849842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.849931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.849959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.850671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.850702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.850849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.850876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.850957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.850983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.851106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.851132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.851220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.851246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.851360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.851385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 
00:23:55.888 [2024-07-25 13:52:52.851503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.851528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.851640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.851666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.851759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.851784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.851924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.851950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.852034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.852068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.852158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.852189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.852272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.852298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.852408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.852434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.852519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.852546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.852664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.852690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 
00:23:55.888 [2024-07-25 13:52:52.852804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.852830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.888 [2024-07-25 13:52:52.853621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.888 [2024-07-25 13:52:52.853659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.888 qpair failed and we were unable to recover it. 00:23:55.890 [2024-07-25 13:52:52.853757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.890 [2024-07-25 13:52:52.853785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.890 qpair failed and we were unable to recover it. 00:23:55.890 [2024-07-25 13:52:52.853876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.890 [2024-07-25 13:52:52.853903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.890 qpair failed and we were unable to recover it. 00:23:55.890 [2024-07-25 13:52:52.853994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.890 [2024-07-25 13:52:52.854021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.890 qpair failed and we were unable to recover it. 00:23:55.890 [2024-07-25 13:52:52.854121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.890 [2024-07-25 13:52:52.854148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.890 qpair failed and we were unable to recover it. 00:23:55.890 [2024-07-25 13:52:52.854241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.890 [2024-07-25 13:52:52.854267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.890 qpair failed and we were unable to recover it. 00:23:55.890 [2024-07-25 13:52:52.854354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.890 [2024-07-25 13:52:52.854380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.890 qpair failed and we were unable to recover it. 00:23:55.890 [2024-07-25 13:52:52.854528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.890 [2024-07-25 13:52:52.854554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.890 qpair failed and we were unable to recover it. 00:23:55.890 [2024-07-25 13:52:52.854673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.890 [2024-07-25 13:52:52.854699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.890 qpair failed and we were unable to recover it. 
00:23:55.890 [2024-07-25 13:52:52.854786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.890 [2024-07-25 13:52:52.854811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.890 qpair failed and we were unable to recover it. 00:23:55.890 [2024-07-25 13:52:52.854899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.890 [2024-07-25 13:52:52.854925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.890 qpair failed and we were unable to recover it. 00:23:55.890 [2024-07-25 13:52:52.855017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.890 [2024-07-25 13:52:52.855044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.890 qpair failed and we were unable to recover it. 00:23:55.890 [2024-07-25 13:52:52.855153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.890 [2024-07-25 13:52:52.855181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.890 qpair failed and we were unable to recover it. 00:23:55.890 [2024-07-25 13:52:52.855266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.890 [2024-07-25 13:52:52.855292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.890 qpair failed and we were unable to recover it. 00:23:55.890 [2024-07-25 13:52:52.855414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.890 [2024-07-25 13:52:52.855441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.890 qpair failed and we were unable to recover it. 00:23:55.890 [2024-07-25 13:52:52.855536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.890 [2024-07-25 13:52:52.855563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.890 qpair failed and we were unable to recover it. 00:23:55.890 [2024-07-25 13:52:52.855673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.890 [2024-07-25 13:52:52.855699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.890 qpair failed and we were unable to recover it. 00:23:55.890 [2024-07-25 13:52:52.855812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.890 [2024-07-25 13:52:52.855838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.890 qpair failed and we were unable to recover it. 00:23:55.890 [2024-07-25 13:52:52.855918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.890 [2024-07-25 13:52:52.855944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.890 qpair failed and we were unable to recover it. 
00:23:55.890 [2024-07-25 13:52:52.856028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.890 [2024-07-25 13:52:52.856055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.890 qpair failed and we were unable to recover it. 00:23:55.890 [2024-07-25 13:52:52.856144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.890 [2024-07-25 13:52:52.856171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.890 qpair failed and we were unable to recover it. 00:23:55.890 [2024-07-25 13:52:52.856260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.890 [2024-07-25 13:52:52.856286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.890 qpair failed and we were unable to recover it. 00:23:55.890 [2024-07-25 13:52:52.856371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.890 [2024-07-25 13:52:52.856396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.890 qpair failed and we were unable to recover it. 00:23:55.890 [2024-07-25 13:52:52.856517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.856550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 00:23:55.891 [2024-07-25 13:52:52.856649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.856679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 00:23:55.891 [2024-07-25 13:52:52.856768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.856793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 00:23:55.891 [2024-07-25 13:52:52.856875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.856901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 00:23:55.891 [2024-07-25 13:52:52.856991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.857018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 00:23:55.891 [2024-07-25 13:52:52.857118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.857144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 
00:23:55.891 [2024-07-25 13:52:52.857230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.857257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 00:23:55.891 [2024-07-25 13:52:52.857376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.857403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 00:23:55.891 [2024-07-25 13:52:52.857500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.857526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 00:23:55.891 [2024-07-25 13:52:52.857624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.857649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 00:23:55.891 [2024-07-25 13:52:52.857740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.857765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 00:23:55.891 [2024-07-25 13:52:52.857888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.857931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 00:23:55.891 [2024-07-25 13:52:52.858040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.858084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 00:23:55.891 [2024-07-25 13:52:52.858180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.858207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 00:23:55.891 [2024-07-25 13:52:52.858291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.858317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 00:23:55.891 [2024-07-25 13:52:52.858436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.858462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 
00:23:55.891 [2024-07-25 13:52:52.858577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.858602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 00:23:55.891 [2024-07-25 13:52:52.858687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.858712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 00:23:55.891 [2024-07-25 13:52:52.858793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.858820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 00:23:55.891 [2024-07-25 13:52:52.858895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.858921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 00:23:55.891 [2024-07-25 13:52:52.859038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.859070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 00:23:55.891 [2024-07-25 13:52:52.859163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.859189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 00:23:55.891 [2024-07-25 13:52:52.859285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.859311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 00:23:55.891 [2024-07-25 13:52:52.859398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.859425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 00:23:55.891 [2024-07-25 13:52:52.859548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.859574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 00:23:55.891 [2024-07-25 13:52:52.859671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.891 [2024-07-25 13:52:52.859697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:55.891 qpair failed and we were unable to recover it. 
00:23:55.891 [2024-07-25 13:52:52.859777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.891 [2024-07-25 13:52:52.859803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:55.891 qpair failed and we were unable to recover it.
00:23:55.891 [2024-07-25 13:52:52.859944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.891 [2024-07-25 13:52:52.859970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:55.891 qpair failed and we were unable to recover it.
00:23:55.891 [2024-07-25 13:52:52.860074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.891 [2024-07-25 13:52:52.860113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:55.891 qpair failed and we were unable to recover it.
00:23:55.891 [2024-07-25 13:52:52.860205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.891 [2024-07-25 13:52:52.860230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:55.891 qpair failed and we were unable to recover it.
00:23:55.891 [2024-07-25 13:52:52.860320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.891 [2024-07-25 13:52:52.860346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:55.891 qpair failed and we were unable to recover it.
00:23:55.891 [2024-07-25 13:52:52.860436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.891 [2024-07-25 13:52:52.860460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:55.891 qpair failed and we were unable to recover it.
00:23:55.891 [2024-07-25 13:52:52.860541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.891 [2024-07-25 13:52:52.860567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:55.891 qpair failed and we were unable to recover it.
00:23:55.891 [2024-07-25 13:52:52.860689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.891 [2024-07-25 13:52:52.860714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:55.891 qpair failed and we were unable to recover it.
00:23:55.891 [2024-07-25 13:52:52.860800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.860826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.860912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.860939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.861031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.861070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.861173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.861200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.861309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.861348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.861442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.861469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.861559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.861586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.861704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.861729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.861841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.861868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.861988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.862014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.862125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.862151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.862236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.862262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.862356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.862382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.862491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.862517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.862626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.862652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.862794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.862822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.863731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.863762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.863860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.863892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.864007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.864033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.864135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.864162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.864260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.864286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.864407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.864433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.864528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.864554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.864644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.864671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.864779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.864805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.864914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.864939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.865668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.865697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.865822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.865849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.865940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.865966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.866064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.866090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.866182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.866208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.866300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.866326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.866442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.866468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.866585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.866611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.866726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.866751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.866880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.892 [2024-07-25 13:52:52.866919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:55.892 qpair failed and we were unable to recover it.
00:23:55.892 [2024-07-25 13:52:52.867013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.893 [2024-07-25 13:52:52.867040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:55.893 qpair failed and we were unable to recover it.
00:23:55.893 [2024-07-25 13:52:52.867136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.893 [2024-07-25 13:52:52.867162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:55.893 qpair failed and we were unable to recover it.
00:23:55.893 [2024-07-25 13:52:52.867253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.893 [2024-07-25 13:52:52.867278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:55.893 qpair failed and we were unable to recover it.
00:23:55.893 [2024-07-25 13:52:52.867388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.893 [2024-07-25 13:52:52.867414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:55.893 qpair failed and we were unable to recover it.
00:23:55.893 [2024-07-25 13:52:52.867511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.893 [2024-07-25 13:52:52.867536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:55.893 qpair failed and we were unable to recover it.
00:23:55.893 [2024-07-25 13:52:52.867615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.893 [2024-07-25 13:52:52.867640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:55.893 qpair failed and we were unable to recover it.
00:23:55.893 [2024-07-25 13:52:52.867754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.893 [2024-07-25 13:52:52.867780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:55.893 qpair failed and we were unable to recover it.
00:23:55.893 [2024-07-25 13:52:52.867869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:55.893 [2024-07-25 13:52:52.867895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.868018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.868049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.868148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.868173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.868255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.868281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.868395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.868420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.868501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.868526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.868619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.868643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.868734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.868758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.868866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.868890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.868999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.869023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.869118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.869144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.869231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.869256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.869348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.869387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.869511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.869541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.869630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.869656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.869743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.869770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.869891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.869918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.870013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.870039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.870137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.870164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.870278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.870305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.870391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.870417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.870507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.870533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.870616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.870642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.870762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.870787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.870904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.870930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.871023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.871084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.871186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.871215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.871296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.177 [2024-07-25 13:52:52.871322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.177 qpair failed and we were unable to recover it.
00:23:56.177 [2024-07-25 13:52:52.871412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.871440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.871531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.871557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.871667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.871692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.871801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.871826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.871918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.871944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.872023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.872049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.872145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.872171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.872249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.872274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.872388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.872417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.872506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.872534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.872646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.872684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.872778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.872805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.872937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.872962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 EAL: No free 2048 kB hugepages reported on node 1
00:23:56.178 [2024-07-25 13:52:52.873043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.873084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.873172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.873199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.873293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.873318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.873401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.873428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.873516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.873543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.873666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.873692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.873787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.873812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.873926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.873953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.874052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.874097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.874192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.874219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.874336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.874361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.874467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.874492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.874585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.874609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.874685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.874710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.874804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.874829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.874954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.874978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.875096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.875122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.875238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.875264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.875341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.875365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.875446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.875471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.178 qpair failed and we were unable to recover it.
00:23:56.178 [2024-07-25 13:52:52.875592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.178 [2024-07-25 13:52:52.875617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.875717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.875747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.875835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.875863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.875949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.875976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.876071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.876098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.876180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.876206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.876292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.876317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.876439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.876466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.876552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.876578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.876676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.876701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.876784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.876809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.876909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.876948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.877092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.877120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.877214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.877240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.877320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.877345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.877423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.877449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.877531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.877557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.877641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.877667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.877783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.877809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.877930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.877956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.878036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.878075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.878183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.878208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.878290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.878317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.878404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.878430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.878543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.878568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.878654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.878680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.878822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.878847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.878932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.878959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.879088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.879115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.879202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.879228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.879316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.879342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.879423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.879448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.879562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.879587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.879705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.179 [2024-07-25 13:52:52.879730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.179 qpair failed and we were unable to recover it.
00:23:56.179 [2024-07-25 13:52:52.879825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.879854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.879948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.879978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.880069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.880096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.880189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.880216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.880314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.880340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.880430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.880456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.880567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.880592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.880680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.880708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.880791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.880817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.880927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.880953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.881036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.881070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.881170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.881197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.881284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.881312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.881430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.881457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.881542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.881568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.881652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.881679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.881789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.881815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.881902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.881928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.882010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.882035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.882141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.882167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.882254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.882279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.882399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.882424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.882507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.882533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.882608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.180 [2024-07-25 13:52:52.882633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.180 qpair failed and we were unable to recover it.
00:23:56.180 [2024-07-25 13:52:52.882752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.180 [2024-07-25 13:52:52.882780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.180 qpair failed and we were unable to recover it. 00:23:56.180 [2024-07-25 13:52:52.882908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.180 [2024-07-25 13:52:52.882946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.180 qpair failed and we were unable to recover it. 00:23:56.180 [2024-07-25 13:52:52.883071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.180 [2024-07-25 13:52:52.883100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.180 qpair failed and we were unable to recover it. 00:23:56.180 [2024-07-25 13:52:52.883190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.180 [2024-07-25 13:52:52.883217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.180 qpair failed and we were unable to recover it. 00:23:56.180 [2024-07-25 13:52:52.883294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.180 [2024-07-25 13:52:52.883320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.180 qpair failed and we were unable to recover it. 00:23:56.180 [2024-07-25 13:52:52.883407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.180 [2024-07-25 13:52:52.883434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.180 qpair failed and we were unable to recover it. 00:23:56.180 [2024-07-25 13:52:52.883519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.180 [2024-07-25 13:52:52.883545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.180 qpair failed and we were unable to recover it. 00:23:56.180 [2024-07-25 13:52:52.883667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.180 [2024-07-25 13:52:52.883693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.180 qpair failed and we were unable to recover it. 00:23:56.180 [2024-07-25 13:52:52.883784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.180 [2024-07-25 13:52:52.883809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.180 qpair failed and we were unable to recover it. 00:23:56.180 [2024-07-25 13:52:52.883895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.180 [2024-07-25 13:52:52.883920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.180 qpair failed and we were unable to recover it. 
00:23:56.180 [2024-07-25 13:52:52.884034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.884067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.884176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.884202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.884293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.884318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.884435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.884461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.884575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.884601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.884691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.884717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.884838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.884866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.884987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.885013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.885110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.885137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.885225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.885252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 
00:23:56.181 [2024-07-25 13:52:52.885379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.885405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.885514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.885540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.885638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.885665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.885755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.885781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.885910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.885948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.886080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.886110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.886205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.886231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.886310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.886336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.886431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.886457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.886546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.886577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 
00:23:56.181 [2024-07-25 13:52:52.886670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.886697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.886819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.886848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.886935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.886961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.887050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.887087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.887207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.887233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.887324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.887350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.887441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.887467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.887577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.887604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.887686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.887711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.887820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.887846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 
00:23:56.181 [2024-07-25 13:52:52.887935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.887960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.888045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.888077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.888170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.888196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.888345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.888373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.181 qpair failed and we were unable to recover it. 00:23:56.181 [2024-07-25 13:52:52.888484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.181 [2024-07-25 13:52:52.888509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.888640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.888666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.888756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.888782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.888895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.888921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.889008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.889034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.889140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.889166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 
00:23:56.182 [2024-07-25 13:52:52.889286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.889312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.889399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.889424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.889517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.889543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.889623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.889649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.889748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.889787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.889904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.889931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.890051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.890084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.890187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.890215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.890298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.890324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.890423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.890449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 
00:23:56.182 [2024-07-25 13:52:52.890562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.890588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.890673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.890699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.890813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.890839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.890952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.890978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.891063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.891089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.891183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.891210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.891292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.891318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.891397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.891423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.891566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.891592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.891708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.891737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 
00:23:56.182 [2024-07-25 13:52:52.891864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.182 [2024-07-25 13:52:52.891902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.182 qpair failed and we were unable to recover it. 00:23:56.182 [2024-07-25 13:52:52.892011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.892049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.892163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.892190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.892275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.892300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.892383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.892408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.892488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.892513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.892597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.892621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.892705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.892730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.892836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.892861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.892940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.892964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 
00:23:56.183 [2024-07-25 13:52:52.893053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.893090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.893198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.893224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.893310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.893336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.893424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.893449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.893530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.893555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.893639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.893664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.893744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.893769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.893862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.893901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.893985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.894013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.894112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.894139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 
00:23:56.183 [2024-07-25 13:52:52.894219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.894245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.894355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.894380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.894466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.894491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.894574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.894602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.894717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.894743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.894855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.894880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.894963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.894989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.895088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.895128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.895228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.895254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.895368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.895394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 
00:23:56.183 [2024-07-25 13:52:52.895515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.895541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.895633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.895661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.895778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.895805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.895898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.895924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.896016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.183 [2024-07-25 13:52:52.896041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.183 qpair failed and we were unable to recover it. 00:23:56.183 [2024-07-25 13:52:52.896136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.896162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.896278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.896303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.896386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.896411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.896491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.896516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.896605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.896632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 
00:23:56.184 [2024-07-25 13:52:52.896722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.896753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.896870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.896896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.897036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.897067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.897162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.897188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.897279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.897307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.897407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.897434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.897523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.897548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.897660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.897684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.897773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.897798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.897910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.897934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 
00:23:56.184 [2024-07-25 13:52:52.898023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.898048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.898160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.898186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.898308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.898334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.898423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.898449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.898526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.898551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.898641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.898679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.898786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.898825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.898914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.898940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.899035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.899065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.899159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.899182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 
00:23:56.184 [2024-07-25 13:52:52.899278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.899302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.899392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.899419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.899616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.899642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.899731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.899758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.899867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.899892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.900007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.900033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.900169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.900199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.900287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.900313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.900395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.900422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 00:23:56.184 [2024-07-25 13:52:52.900538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.184 [2024-07-25 13:52:52.900564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.184 qpair failed and we were unable to recover it. 
00:23:56.185 [2024-07-25 13:52:52.900678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.185 [2024-07-25 13:52:52.900706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.185 qpair failed and we were unable to recover it. 00:23:56.185 [2024-07-25 13:52:52.900799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.185 [2024-07-25 13:52:52.900837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.185 qpair failed and we were unable to recover it. 00:23:56.185 [2024-07-25 13:52:52.900967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.185 [2024-07-25 13:52:52.901006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.185 qpair failed and we were unable to recover it. 00:23:56.185 [2024-07-25 13:52:52.901100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.185 [2024-07-25 13:52:52.901129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.185 qpair failed and we were unable to recover it. 00:23:56.185 [2024-07-25 13:52:52.901230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.185 [2024-07-25 13:52:52.901256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.185 qpair failed and we were unable to recover it. 00:23:56.185 [2024-07-25 13:52:52.901334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.185 [2024-07-25 13:52:52.901360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.185 qpair failed and we were unable to recover it. 00:23:56.185 [2024-07-25 13:52:52.901477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.185 [2024-07-25 13:52:52.901502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.185 qpair failed and we were unable to recover it. 00:23:56.185 [2024-07-25 13:52:52.901615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.185 [2024-07-25 13:52:52.901641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.185 qpair failed and we were unable to recover it. 00:23:56.185 [2024-07-25 13:52:52.901742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.185 [2024-07-25 13:52:52.901771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.185 qpair failed and we were unable to recover it. 00:23:56.185 [2024-07-25 13:52:52.901888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.185 [2024-07-25 13:52:52.901916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.185 qpair failed and we were unable to recover it. 
00:23:56.185 [2024-07-25 13:52:52.902003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.902029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.902150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.902177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.902378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.902404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.902515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.902540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.902654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.902679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.902768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.902795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.902949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.902987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.903086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.903125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.903218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.903243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.903357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.903383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.903501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.903527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.903615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.903640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.903760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.903786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.903899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.903936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.904068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.904107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.904205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.904231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.904339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.904365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.904472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.904498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.904617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.904644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.904786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.904814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.904947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.904986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.905087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.905114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.905210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.905236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.905324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.185 [2024-07-25 13:52:52.905348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.185 qpair failed and we were unable to recover it.
00:23:56.185 [2024-07-25 13:52:52.905490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.905515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.905632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.905656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.905742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.905766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.905862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.905890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.905972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.905997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.906126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.906154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.906278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.906305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.906389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.906415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.906553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.906579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.906672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.906698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.906831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.906870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.906973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.907000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.907090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.907116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.907231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.907257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.907347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.907372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.907458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.907484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.907584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.907610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.907701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.907726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.907829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.907854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.907964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.907988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.908047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:23:56.186 [2024-07-25 13:52:52.908109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.908134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.908222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.908246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.908361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.908386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.908465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.908490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.908575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.908600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.908684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.908709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.908908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.908937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.909079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.909129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.909229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.909256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.909343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.909373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.909491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.909516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.909601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.909627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.186 [2024-07-25 13:52:52.909769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.186 [2024-07-25 13:52:52.909796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.186 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.909897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.909924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.910013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.910039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.910143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.910169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.910248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.910273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.910356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.910382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.910483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.910509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.910646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.910671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.910756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.910782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.910873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.910899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.911012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.911039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.911172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.911202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.911316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.911342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.911460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.911486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.911607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.911633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.911746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.911774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.911858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.911885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.911988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.912026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.912136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.912163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.912284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.912310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.912392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.912418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.912528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.912552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.912667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.912695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.912794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.912821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.912908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.912936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.913036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.913067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.913191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.913218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.187 [2024-07-25 13:52:52.913332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.187 [2024-07-25 13:52:52.913358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.187 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.913512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.913538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.913631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.913659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.913744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.913770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.913858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.913884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.913968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.913994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.914083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.914110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.914222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.914260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.914375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.914402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.914490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.914514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.914592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.914622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.914707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.914733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.914840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.914879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.914964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.914991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.915129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.915155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.915299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.915325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.915444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.915470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.915582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.915608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.915729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.915754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.915839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.915865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.915950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.915979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.916104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.916131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.916237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.916262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.916356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.916381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.916475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.916499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.916616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.916642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.916729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.916753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.916836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.916863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.916995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.917033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.917173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.917201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.917292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.917319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.917432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.917458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.917556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.917581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.917668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.917694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.188 [2024-07-25 13:52:52.917818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.188 [2024-07-25 13:52:52.917844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.188 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.917929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.917955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.918072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.918099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.918206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.918235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.918319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.918343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.918430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.918455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.918529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.918553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.918672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.918701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.918787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.918814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.918896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.918922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.919001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.919026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.919124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.919151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.919255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.919295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.919449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.919477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.919567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.919594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.919696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.919722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.919815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.919842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.919941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.919966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.920083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.920121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.920263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.920291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.920415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.920441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.920549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.920575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.920661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.920687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.920783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.920809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.920951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.920977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.921067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.921094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.921181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.921207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.921289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.921315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.921436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.921462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.921606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.921632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.921723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.921750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.921836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.921862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.189 [2024-07-25 13:52:52.921946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.189 [2024-07-25 13:52:52.921971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.189 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.922066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.922092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.922193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.922219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.922338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.922364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.922488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.922513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.922656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.922681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.922796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.922822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.922923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.922962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.923066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.923095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.923190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.923216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.923332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.923357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.923443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.923474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.923562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.923588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.923709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.923736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.923840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.923878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.923979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.924006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.924129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.924155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.924257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.924282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.924364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.924389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.924476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.924500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.924613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.924638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.924722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.924746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.924838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.924863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.924956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.924983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.925074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.925100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.925185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.925211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.925318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.925344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.925437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.925463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.925570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.925595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.925680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.925707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.925798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.925837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.925924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.925952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.926068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.926094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.926176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.926202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.926316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.190 [2024-07-25 13:52:52.926342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.190 qpair failed and we were unable to recover it.
00:23:56.190 [2024-07-25 13:52:52.926453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.926478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.926567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.926593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.926713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.926737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.926832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.926864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.926954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.926980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.927074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.927110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.927203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.927230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.927312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.927338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.927420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.927450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.927567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.927593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.927678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.927706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.927798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.927828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.928039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.928071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.928216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.928242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.928330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.928357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.928476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.928502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.928590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.928616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.928732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.928760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.928850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.928877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.928960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.928987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.929102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.929127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.929213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.929238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.929319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.929343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.929425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.929450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.929533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.929557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.929653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.191 [2024-07-25 13:52:52.929681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.191 qpair failed and we were unable to recover it.
00:23:56.191 [2024-07-25 13:52:52.929772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.191 [2024-07-25 13:52:52.929798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.191 qpair failed and we were unable to recover it. 00:23:56.191 [2024-07-25 13:52:52.929891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.191 [2024-07-25 13:52:52.929916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.191 qpair failed and we were unable to recover it. 00:23:56.191 [2024-07-25 13:52:52.930038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.191 [2024-07-25 13:52:52.930069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.191 qpair failed and we were unable to recover it. 00:23:56.191 [2024-07-25 13:52:52.930183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.191 [2024-07-25 13:52:52.930209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.191 qpair failed and we were unable to recover it. 00:23:56.191 [2024-07-25 13:52:52.930321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.191 [2024-07-25 13:52:52.930347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.191 qpair failed and we were unable to recover it. 00:23:56.191 [2024-07-25 13:52:52.930426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.191 [2024-07-25 13:52:52.930452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.191 qpair failed and we were unable to recover it. 00:23:56.191 [2024-07-25 13:52:52.930531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.930557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.930655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.930692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.930787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.930813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.930907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.930935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 
00:23:56.192 [2024-07-25 13:52:52.931024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.931051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.931156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.931183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.931299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.931325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.931405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.931430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.931541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.931567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.931681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.931707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.931827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.931854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.931944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.931974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.932081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.932116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.932231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.932257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 
00:23:56.192 [2024-07-25 13:52:52.932336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.932360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.932471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.932495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.932618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.932644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.932738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.932764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.932857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.932885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.933002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.933028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.933123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.933149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.933238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.933263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.933346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.933372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.933515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.933542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 
00:23:56.192 [2024-07-25 13:52:52.933655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.933681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.933812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.933838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.933922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.933947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.934074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.934104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.934220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.192 [2024-07-25 13:52:52.934246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.192 qpair failed and we were unable to recover it. 00:23:56.192 [2024-07-25 13:52:52.934332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.934357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.934447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.934473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.934578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.934604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.934804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.934829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.935021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.935046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 
00:23:56.193 [2024-07-25 13:52:52.935146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.935173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.935264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.935289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.935370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.935396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.935499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.935537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.935641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.935667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.935798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.935826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.935947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.935972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.936091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.936118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.936210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.936236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.936325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.936351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 
00:23:56.193 [2024-07-25 13:52:52.936439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.936464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.936549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.936574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.936692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.936719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.936838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.936864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.936977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.937003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.937088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.937114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.937198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.937224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.937342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.937373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.937489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.937516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.937710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.937736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 
00:23:56.193 [2024-07-25 13:52:52.937832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.937858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.938003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.938031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.938147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.938173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.938265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.938291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.938401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.938427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.938539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.938564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.193 [2024-07-25 13:52:52.938677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.193 [2024-07-25 13:52:52.938704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.193 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.938788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.938815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.938929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.938955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.939040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.939071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 
00:23:56.194 [2024-07-25 13:52:52.939193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.939220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.939318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.939343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.939434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.939459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.939548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.939575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.939694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.939720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.939806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.939832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.939941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.939967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.940080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.940119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.940252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.940290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.940412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.940438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 
00:23:56.194 [2024-07-25 13:52:52.940527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.940551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.940661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.940685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.940884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.940911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.940990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.941016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.941129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.941160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.941255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.941281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.941388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.941413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.941496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.941521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.941675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.941703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.941819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.941847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 
00:23:56.194 [2024-07-25 13:52:52.941939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.941965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.942054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.942086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.942182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.942207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.942298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.942324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.942433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.942458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.942541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.942567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.942652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.942678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.942763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.942789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.942884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.942922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.194 [2024-07-25 13:52:52.943018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.943055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 
00:23:56.194 [2024-07-25 13:52:52.943182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.194 [2024-07-25 13:52:52.943209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.194 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.943299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.943325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.943438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.943465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.943548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.943574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.943665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.943692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.943810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.943838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.943936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.943961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.944074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.944100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.944185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.944210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.944288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.944314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 
00:23:56.195 [2024-07-25 13:52:52.944391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.944416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.944506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.944534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.944646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.944671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.944760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.944786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.944873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.944899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.945013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.945038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.945162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.945189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.945276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.945303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.945385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.945411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.945515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.945540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 
00:23:56.195 [2024-07-25 13:52:52.945619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.945643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.945736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.945766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.945849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.945876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.946014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.946040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.946135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.946165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.946251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.946277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.946386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.946412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.946526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.946552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.946648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.946674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 00:23:56.195 [2024-07-25 13:52:52.946758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.195 [2024-07-25 13:52:52.946784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.195 qpair failed and we were unable to recover it. 
00:23:56.195 [2024-07-25 13:52:52.946867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.195 [2024-07-25 13:52:52.946891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.195 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats roughly 200 more times through 00:23:56.202 [2024-07-25 13:52:52.974371], cycling across tqpair=0x118b250, 0x7f3c88000b90, 0x7f3c90000b90, and 0x7f3c98000b90; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111, and every qpair ends with "qpair failed and we were unable to recover it." ...]
00:23:56.202 [2024-07-25 13:52:52.974450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.974474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.974555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.974582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.974673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.974700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.974815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.974843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.974943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.974969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.975111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.975139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.975237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.975263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.975352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.975379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.975461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.975485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.975575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.975600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 
00:23:56.202 [2024-07-25 13:52:52.975710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.975734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.975851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.975876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.975962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.975992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.976081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.976107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.976220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.976245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.976368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.976393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.976514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.976539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.976651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.976676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.976877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.976904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.977023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.977049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 
00:23:56.202 [2024-07-25 13:52:52.977145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.977174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.977276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.977302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.977419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.977445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.977561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.977587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.977669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.977696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.977801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.202 [2024-07-25 13:52:52.977838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.202 qpair failed and we were unable to recover it. 00:23:56.202 [2024-07-25 13:52:52.977967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.977994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.978127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.978154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.978285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.978312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.978455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.978481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 
00:23:56.203 [2024-07-25 13:52:52.978568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.978595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.978678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.978705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.978820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.978859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.978981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.979008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.979105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.979132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.979250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.979277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.979374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.979400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.979547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.979573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.979695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.979721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.979823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.979851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 
00:23:56.203 [2024-07-25 13:52:52.979969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.979995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.980087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.980114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.980228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.980255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.980376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.980404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.980491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.980518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.980631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.980656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.980744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.980770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.980865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.980892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.980981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.981008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.981095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.981120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 
00:23:56.203 [2024-07-25 13:52:52.981230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.981255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.981342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.981369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.981455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.981485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.981593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.981618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.203 [2024-07-25 13:52:52.981736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.203 [2024-07-25 13:52:52.981762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.203 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.981878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.981905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.981990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.982017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.982151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.982178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.982266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.982292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.982486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.982512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 
00:23:56.204 [2024-07-25 13:52:52.982606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.982632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.982746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.982772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.982858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.982883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.982969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.982997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.983153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.983191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.983314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.983341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.983480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.983506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.983618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.983644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.983737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.983763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.983856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.983882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 
00:23:56.204 [2024-07-25 13:52:52.983979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.984018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.984140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.984179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.984276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.984304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.984423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.984449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.984563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.984590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.984677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.984703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.984792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.984819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.984947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.984985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.985088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.985116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.985257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.985289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 
00:23:56.204 [2024-07-25 13:52:52.985404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.985430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.985577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.985603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.985711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.985737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.985852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.985878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.985974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.986002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.986138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.986166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.986288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.986314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.986425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.204 [2024-07-25 13:52:52.986451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.204 qpair failed and we were unable to recover it. 00:23:56.204 [2024-07-25 13:52:52.986565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.986590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.986680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.986712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 
00:23:56.205 [2024-07-25 13:52:52.986804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.986830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.986935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.986973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.987073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.987100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.987197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.987222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.987311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.987336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.987421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.987446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.987538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.987563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.987654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.987681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.987764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.987791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.987903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.987929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 
00:23:56.205 [2024-07-25 13:52:52.988012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.988037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.988133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.988159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.988247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.988273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.988365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.988390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.988486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.988511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.988610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.988649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.988740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.988767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.988853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.988878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.988965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.988990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.989077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.989103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 
00:23:56.205 [2024-07-25 13:52:52.989187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.989212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.989303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.989328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.989467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.989492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.989573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.989597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.989724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.989752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.989841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.989871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.989970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.989997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.990111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.990139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.990228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.990255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.990335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.990366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 
00:23:56.205 [2024-07-25 13:52:52.990459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.990485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.990574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.990601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.205 qpair failed and we were unable to recover it. 00:23:56.205 [2024-07-25 13:52:52.990719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.205 [2024-07-25 13:52:52.990745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.206 qpair failed and we were unable to recover it. 00:23:56.206 [2024-07-25 13:52:52.990830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.206 [2024-07-25 13:52:52.990856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.206 qpair failed and we were unable to recover it. 00:23:56.206 [2024-07-25 13:52:52.990950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.206 [2024-07-25 13:52:52.990975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.206 qpair failed and we were unable to recover it. 00:23:56.206 [2024-07-25 13:52:52.991103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.206 [2024-07-25 13:52:52.991142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.206 qpair failed and we were unable to recover it. 00:23:56.206 [2024-07-25 13:52:52.991236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.206 [2024-07-25 13:52:52.991263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.206 qpair failed and we were unable to recover it. 00:23:56.206 [2024-07-25 13:52:52.991375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.206 [2024-07-25 13:52:52.991401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.206 qpair failed and we were unable to recover it. 00:23:56.206 [2024-07-25 13:52:52.991485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.206 [2024-07-25 13:52:52.991510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.206 qpair failed and we were unable to recover it. 00:23:56.206 [2024-07-25 13:52:52.991625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.206 [2024-07-25 13:52:52.991652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.206 qpair failed and we were unable to recover it. 
00:23:56.206 [2024-07-25 13:52:52.991739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.206 [2024-07-25 13:52:52.991765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.206 qpair failed and we were unable to recover it. 00:23:56.206 [2024-07-25 13:52:52.991876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.206 [2024-07-25 13:52:52.991903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.206 qpair failed and we were unable to recover it. 00:23:56.206 [2024-07-25 13:52:52.992043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.206 [2024-07-25 13:52:52.992074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.206 qpair failed and we were unable to recover it. 00:23:56.206 [2024-07-25 13:52:52.992163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.206 [2024-07-25 13:52:52.992188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.206 qpair failed and we were unable to recover it. 00:23:56.206 [2024-07-25 13:52:52.992282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.206 [2024-07-25 13:52:52.992307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.206 qpair failed and we were unable to recover it. 00:23:56.206 [2024-07-25 13:52:52.992420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.206 [2024-07-25 13:52:52.992445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.206 qpair failed and we were unable to recover it. 00:23:56.206 [2024-07-25 13:52:52.992526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.206 [2024-07-25 13:52:52.992552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.206 qpair failed and we were unable to recover it. 00:23:56.206 [2024-07-25 13:52:52.992648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.206 [2024-07-25 13:52:52.992675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.206 qpair failed and we were unable to recover it. 00:23:56.206 [2024-07-25 13:52:52.992872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.206 [2024-07-25 13:52:52.992898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.206 qpair failed and we were unable to recover it. 00:23:56.206 [2024-07-25 13:52:52.993038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.206 [2024-07-25 13:52:52.993072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.206 qpair failed and we were unable to recover it. 
[... ~190 further repetitions of the same three-line error (posix.c:1023:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error / qpair failed and we were unable to recover it.) from 13:52:52.993159 through 13:52:53.017843, cycling over tqpair=0x118b250, 0x7f3c88000b90, 0x7f3c90000b90 and 0x7f3c98000b90, all with addr=10.0.0.2, port=4420 ...]
00:23:56.212 [2024-07-25 13:52:53.017957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.017983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 00:23:56.212 [2024-07-25 13:52:53.018069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.018095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 00:23:56.212 [2024-07-25 13:52:53.018180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.018207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 00:23:56.212 [2024-07-25 13:52:53.018293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.018320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 00:23:56.212 [2024-07-25 13:52:53.018411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.018438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 00:23:56.212 [2024-07-25 13:52:53.018552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.018591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 00:23:56.212 [2024-07-25 13:52:53.018707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.018736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 00:23:56.212 [2024-07-25 13:52:53.018826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.018851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 00:23:56.212 [2024-07-25 13:52:53.018961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.018987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 00:23:56.212 [2024-07-25 13:52:53.019079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.019105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 
00:23:56.212 [2024-07-25 13:52:53.019183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.019208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 00:23:56.212 [2024-07-25 13:52:53.019365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.019392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 00:23:56.212 [2024-07-25 13:52:53.019506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.019532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 00:23:56.212 [2024-07-25 13:52:53.019615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.019641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 00:23:56.212 [2024-07-25 13:52:53.019779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.019805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 00:23:56.212 [2024-07-25 13:52:53.019950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.019976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 00:23:56.212 [2024-07-25 13:52:53.020096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.020125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 00:23:56.212 [2024-07-25 13:52:53.020241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.020269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 00:23:56.212 [2024-07-25 13:52:53.020390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.020415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 00:23:56.212 [2024-07-25 13:52:53.020533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.020559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 
00:23:56.212 [2024-07-25 13:52:53.020651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.020677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 00:23:56.212 [2024-07-25 13:52:53.020763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.020789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 00:23:56.212 [2024-07-25 13:52:53.020876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.020902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 00:23:56.212 [2024-07-25 13:52:53.021011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.021037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 00:23:56.212 [2024-07-25 13:52:53.021157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.021183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.212 qpair failed and we were unable to recover it. 00:23:56.212 [2024-07-25 13:52:53.021324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.212 [2024-07-25 13:52:53.021350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.021441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.021467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.021587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.021613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.021704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.021731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.021849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.021877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 
00:23:56.213 [2024-07-25 13:52:53.021979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.022022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.022152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.022180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.022263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.022288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.022367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.022392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.022524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.022550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.022666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.022693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.022779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.022804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.022946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.022972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.023082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.023108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.023191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.023216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 
00:23:56.213 [2024-07-25 13:52:53.023327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.023353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.023433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.023458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.023569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.023594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.023713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.023739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.023859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.023885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.023998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.024023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.024123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.024149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.024262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.024287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.024370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.024395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.024502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.024527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 
00:23:56.213 [2024-07-25 13:52:53.024656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.024696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.024795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.024822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.024908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.024935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.025020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.025047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.213 qpair failed and we were unable to recover it. 00:23:56.213 [2024-07-25 13:52:53.025204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.213 [2024-07-25 13:52:53.025232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.214 qpair failed and we were unable to recover it. 00:23:56.214 [2024-07-25 13:52:53.025351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.214 [2024-07-25 13:52:53.025377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.214 qpair failed and we were unable to recover it. 00:23:56.214 [2024-07-25 13:52:53.025474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.214 [2024-07-25 13:52:53.025499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.214 qpair failed and we were unable to recover it. 00:23:56.214 [2024-07-25 13:52:53.025584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.214 [2024-07-25 13:52:53.025615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.214 qpair failed and we were unable to recover it. 00:23:56.214 [2024-07-25 13:52:53.025705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.214 [2024-07-25 13:52:53.025730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.214 qpair failed and we were unable to recover it. 00:23:56.214 [2024-07-25 13:52:53.025868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.214 [2024-07-25 13:52:53.025893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.214 qpair failed and we were unable to recover it. 
00:23:56.214 [2024-07-25 13:52:53.026002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.214 [2024-07-25 13:52:53.026027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.214 qpair failed and we were unable to recover it. 00:23:56.214 [2024-07-25 13:52:53.026138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.214 [2024-07-25 13:52:53.026177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.214 qpair failed and we were unable to recover it. 00:23:56.214 [2024-07-25 13:52:53.026271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.214 [2024-07-25 13:52:53.026300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.214 qpair failed and we were unable to recover it. 00:23:56.214 [2024-07-25 13:52:53.026382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.214 [2024-07-25 13:52:53.026408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.214 qpair failed and we were unable to recover it. 00:23:56.214 [2024-07-25 13:52:53.026490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.214 [2024-07-25 13:52:53.026517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.214 qpair failed and we were unable to recover it. 00:23:56.214 [2024-07-25 13:52:53.026599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.214 [2024-07-25 13:52:53.026624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.214 qpair failed and we were unable to recover it. 00:23:56.214 [2024-07-25 13:52:53.026704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.214 [2024-07-25 13:52:53.026728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.214 qpair failed and we were unable to recover it. 00:23:56.214 [2024-07-25 13:52:53.026809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.214 [2024-07-25 13:52:53.026835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.214 qpair failed and we were unable to recover it. 00:23:56.214 [2024-07-25 13:52:53.026913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.214 [2024-07-25 13:52:53.026941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.214 qpair failed and we were unable to recover it. 00:23:56.214 [2024-07-25 13:52:53.027025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.214 [2024-07-25 13:52:53.027053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.214 qpair failed and we were unable to recover it. 
00:23:56.214 [2024-07-25 13:52:53.027156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.214 [2024-07-25 13:52:53.027182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.214 qpair failed and we were unable to recover it. 00:23:56.214 [2024-07-25 13:52:53.027276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.214 [2024-07-25 13:52:53.027303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.214 qpair failed and we were unable to recover it. 00:23:56.214 [2024-07-25 13:52:53.027394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.214 [2024-07-25 13:52:53.027420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.214 qpair failed and we were unable to recover it. 00:23:56.214 [2024-07-25 13:52:53.027509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.214 [2024-07-25 13:52:53.027537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.214 qpair failed and we were unable to recover it. 00:23:56.214 [2024-07-25 13:52:53.027629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.214 [2024-07-25 13:52:53.027654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.214 qpair failed and we were unable to recover it. 00:23:56.214 [2024-07-25 13:52:53.027741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.214 [2024-07-25 13:52:53.027769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.214 qpair failed and we were unable to recover it. 00:23:56.214 [2024-07-25 13:52:53.027850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.214 [2024-07-25 13:52:53.027876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.214 qpair failed and we were unable to recover it. 00:23:56.214 [2024-07-25 13:52:53.027956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.214 [2024-07-25 13:52:53.027950] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.214 [2024-07-25 13:52:53.027981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.214 [2024-07-25 13:52:53.027986] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.214 qpair failed and we were unable to recover it. 00:23:56.214 [2024-07-25 13:52:53.028002] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.214 [2024-07-25 13:52:53.028015] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.214 [2024-07-25 13:52:53.028026] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:56.214 [2024-07-25 13:52:53.028097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:23:56.214 [2024-07-25 13:52:53.028127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:23:56.214 [2024-07-25 13:52:53.028155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:23:56.214 [2024-07-25 13:52:53.028158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
[... connect()-failed / qpair-failed triplets continue interleaved with the reactor startup above: 10 more between 13:52:53.028064 and 13:52:53.029096, on tqpair=0x118b250, 0x7f3c90000b90, 0x7f3c88000b90, and 0x7f3c98000b90 ...]
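Note: errno = 111 is ECONNREFUSED on Linux: the TCP connect() reached a live host, but nothing was accepting on port 4420 yet, so the kernel answered with RST. The NOTICE lines above show why: the NVMe-oF target application is still starting up (trace setup, reactors coming online) while the initiator is already dialing. The standalone sketch below is not SPDK source; the address and port are copied from the log, and any reachable-but-closed port (e.g. 127.0.0.1:4420 with no listener) produces the same errno. It reproduces the exact failure posix_sock_create is reporting:

/* demo_econnrefused.c -- minimal sketch, not SPDK code.
 * Shows where "connect() failed, errno = 111" comes from: on Linux,
 * 111 is ECONNREFUSED, returned when the peer host is up but no
 * socket is listening on the requested port. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* Address and port taken from the log; swap in 127.0.0.1 and any
     * closed port to reproduce locally. */
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no listener present this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Built with 'cc demo_econnrefused.c' and run against a port with no listener, it prints the same errno = 111 the initiator logs above.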
00:23:56.215 [2024-07-25 13:52:53.029184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.215 [2024-07-25 13:52:53.029210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.215 qpair failed and we were unable to recover it.
[... the triplet repeats ~90 times between 13:52:53.029184 and 13:52:53.040049, still cycling through tqpair=0x7f3c98000b90, 0x7f3c90000b90, 0x7f3c88000b90, and 0x118b250 ...]
00:23:56.217 [2024-07-25 13:52:53.040155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.217 [2024-07-25 13:52:53.040182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.217 qpair failed and we were unable to recover it. 00:23:56.217 [2024-07-25 13:52:53.040297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.217 [2024-07-25 13:52:53.040322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.217 qpair failed and we were unable to recover it. 00:23:56.217 [2024-07-25 13:52:53.040403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.217 [2024-07-25 13:52:53.040428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.217 qpair failed and we were unable to recover it. 00:23:56.217 [2024-07-25 13:52:53.040516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.217 [2024-07-25 13:52:53.040542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.217 qpair failed and we were unable to recover it. 00:23:56.217 [2024-07-25 13:52:53.040627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.217 [2024-07-25 13:52:53.040653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.217 qpair failed and we were unable to recover it. 00:23:56.217 [2024-07-25 13:52:53.040741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.217 [2024-07-25 13:52:53.040767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.217 qpair failed and we were unable to recover it. 00:23:56.217 [2024-07-25 13:52:53.040875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.217 [2024-07-25 13:52:53.040900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.217 qpair failed and we were unable to recover it. 00:23:56.217 [2024-07-25 13:52:53.041042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.217 [2024-07-25 13:52:53.041076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.217 qpair failed and we were unable to recover it. 00:23:56.217 [2024-07-25 13:52:53.041160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.217 [2024-07-25 13:52:53.041186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.217 qpair failed and we were unable to recover it. 00:23:56.217 [2024-07-25 13:52:53.041267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.217 [2024-07-25 13:52:53.041293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.217 qpair failed and we were unable to recover it. 
00:23:56.217 [2024-07-25 13:52:53.041377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.217 [2024-07-25 13:52:53.041403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.217 qpair failed and we were unable to recover it. 00:23:56.217 [2024-07-25 13:52:53.041521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.217 [2024-07-25 13:52:53.041559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.217 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.041685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.041711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.041798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.041823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.041900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.041925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.042035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.042066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.042149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.042174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.042258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.042284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.042389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.042414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.042488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.042513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 
00:23:56.218 [2024-07-25 13:52:53.042589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.042614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.042698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.042724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.042812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.042840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.042926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.042952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.043064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.043103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.043214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.043240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.043335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.043361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.043474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.043499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.043580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.043607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.043688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.043716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 
00:23:56.218 [2024-07-25 13:52:53.043829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.043855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.043933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.043959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.044085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.044112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.044201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.044228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.044377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.044403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.044488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.044514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.044596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.044623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.044702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.044728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.044808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.044835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.044919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.044944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 
00:23:56.218 [2024-07-25 13:52:53.045086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.045114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.218 [2024-07-25 13:52:53.045206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.218 [2024-07-25 13:52:53.045231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.218 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.045326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.045353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.045437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.045463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.045544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.045572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.045684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.045723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.045874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.045900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.045981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.046006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.046122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.046147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.046259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.046285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 
00:23:56.219 [2024-07-25 13:52:53.046372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.046399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.046490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.046520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.046642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.046671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.046789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.046816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.046900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.046926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.047014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.047039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.047139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.047167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.047258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.047285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.047365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.047391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.047483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.047509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 
00:23:56.219 [2024-07-25 13:52:53.047595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.047621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.047714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.047740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.047822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.047849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.047963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.047989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.048076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.048102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.048190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.048215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.048303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.048328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.048407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.048432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.048517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.048543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.048624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.048649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 
00:23:56.219 [2024-07-25 13:52:53.048734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.048763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.048852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.048877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.048991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.049017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.049110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.049136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.049277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.049304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.049392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.049417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.049502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.049528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.049639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.049665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.049762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.219 [2024-07-25 13:52:53.049791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.219 qpair failed and we were unable to recover it. 00:23:56.219 [2024-07-25 13:52:53.049874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.049900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 
00:23:56.220 [2024-07-25 13:52:53.049986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.050012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.050105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.050132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.050208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.050233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.050316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.050342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.050455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.050480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.050572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.050598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.050693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.050721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.050804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.050829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.050926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.050964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.051052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.051094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 
00:23:56.220 [2024-07-25 13:52:53.051183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.051208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.051302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.051333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.051413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.051438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.051519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.051544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.051632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.051659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.051745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.051770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.051895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.051934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.052028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.052055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.052151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.052178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.052254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.052280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 
00:23:56.220 [2024-07-25 13:52:53.052408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.052434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.052520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.052547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.052626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.052652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.052732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.052758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.052857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.052885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.053012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.053040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.053134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.053161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.053247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.053272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.053349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.053374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.053453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.053477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 
00:23:56.220 [2024-07-25 13:52:53.053556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.053581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.053685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.053709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.053789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.053813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.053927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.053955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.054043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.054079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.054196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.220 [2024-07-25 13:52:53.054222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.220 qpair failed and we were unable to recover it. 00:23:56.220 [2024-07-25 13:52:53.054299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.221 [2024-07-25 13:52:53.054325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.221 qpair failed and we were unable to recover it. 00:23:56.221 [2024-07-25 13:52:53.054403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.221 [2024-07-25 13:52:53.054428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.221 qpair failed and we were unable to recover it. 00:23:56.221 [2024-07-25 13:52:53.054539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.221 [2024-07-25 13:52:53.054569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.221 qpair failed and we were unable to recover it. 00:23:56.221 [2024-07-25 13:52:53.054649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.221 [2024-07-25 13:52:53.054676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.221 qpair failed and we were unable to recover it. 
00:23:56.221 [2024-07-25 13:52:53.054812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.221 [2024-07-25 13:52:53.054839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.221 qpair failed and we were unable to recover it. 00:23:56.221 [2024-07-25 13:52:53.054940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.221 [2024-07-25 13:52:53.054969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.221 qpair failed and we were unable to recover it. 00:23:56.221 [2024-07-25 13:52:53.055085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.221 [2024-07-25 13:52:53.055110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.221 qpair failed and we were unable to recover it. 00:23:56.221 [2024-07-25 13:52:53.055189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.221 [2024-07-25 13:52:53.055215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.221 qpair failed and we were unable to recover it. 00:23:56.221 [2024-07-25 13:52:53.055298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.221 [2024-07-25 13:52:53.055323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.221 qpair failed and we were unable to recover it. 00:23:56.221 [2024-07-25 13:52:53.055439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.221 [2024-07-25 13:52:53.055465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.221 qpair failed and we were unable to recover it. 00:23:56.221 [2024-07-25 13:52:53.055579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.221 [2024-07-25 13:52:53.055606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.221 qpair failed and we were unable to recover it. 00:23:56.221 [2024-07-25 13:52:53.055727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.221 [2024-07-25 13:52:53.055753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.221 qpair failed and we were unable to recover it. 00:23:56.221 [2024-07-25 13:52:53.055837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.221 [2024-07-25 13:52:53.055862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.221 qpair failed and we were unable to recover it. 00:23:56.221 [2024-07-25 13:52:53.055943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.221 [2024-07-25 13:52:53.055968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.221 qpair failed and we were unable to recover it. 
00:23:56.221 [2024-07-25 13:52:53.056085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.221 [2024-07-25 13:52:53.056113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.221 qpair failed and we were unable to recover it.
00:23:56.221 [... this three-record sequence repeats without interruption through 00:23:56.226 (2024-07-25 13:52:53.081873), cycling over tqpair handles 0x7f3c88000b90, 0x7f3c90000b90, 0x7f3c98000b90, and 0x118b250; every record reports the same connect() failure to addr=10.0.0.2, port=4420 with errno = 111, each followed by "qpair failed and we were unable to recover it." ...]
00:23:56.226 [2024-07-25 13:52:53.081955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.226 [2024-07-25 13:52:53.081982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.226 qpair failed and we were unable to recover it. 00:23:56.226 [2024-07-25 13:52:53.082081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.226 [2024-07-25 13:52:53.082108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.226 qpair failed and we were unable to recover it. 00:23:56.226 [2024-07-25 13:52:53.082203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.226 [2024-07-25 13:52:53.082228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.226 qpair failed and we were unable to recover it. 00:23:56.226 [2024-07-25 13:52:53.082310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.226 [2024-07-25 13:52:53.082336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.226 qpair failed and we were unable to recover it. 00:23:56.226 [2024-07-25 13:52:53.082421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.226 [2024-07-25 13:52:53.082448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.226 qpair failed and we were unable to recover it. 00:23:56.226 [2024-07-25 13:52:53.082556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.226 [2024-07-25 13:52:53.082581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.226 qpair failed and we were unable to recover it. 00:23:56.226 [2024-07-25 13:52:53.082668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.226 [2024-07-25 13:52:53.082696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.226 qpair failed and we were unable to recover it. 00:23:56.226 [2024-07-25 13:52:53.082817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.226 [2024-07-25 13:52:53.082843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.226 qpair failed and we were unable to recover it. 00:23:56.226 [2024-07-25 13:52:53.082935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.226 [2024-07-25 13:52:53.082961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.226 qpair failed and we were unable to recover it. 00:23:56.226 [2024-07-25 13:52:53.083052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.226 [2024-07-25 13:52:53.083087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.226 qpair failed and we were unable to recover it. 
00:23:56.226 [2024-07-25 13:52:53.083166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.226 [2024-07-25 13:52:53.083192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.226 qpair failed and we were unable to recover it. 00:23:56.226 [2024-07-25 13:52:53.083303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.083329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.083416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.083442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.083530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.083555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.083637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.083662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.083749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.083776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.083873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.083900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.084002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.084041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.084182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.084209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.084293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.084326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 
00:23:56.227 [2024-07-25 13:52:53.084421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.084447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.084564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.084591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.084673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.084699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.084789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.084828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.084913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.084940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.085026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.085053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.085151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.085176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.085273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.085298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.085382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.085407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.085523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.085548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 
00:23:56.227 [2024-07-25 13:52:53.085660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.085685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.085824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.085852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.085946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.085973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.086070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.086096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.086182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.086208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.086296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.086321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.086438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.086464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.086543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.086568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.086676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.086702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.086788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.086814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 
00:23:56.227 [2024-07-25 13:52:53.086925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.086952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.087036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.087068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.087149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.087175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.087268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.087294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.087409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.227 [2024-07-25 13:52:53.087434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.227 qpair failed and we were unable to recover it. 00:23:56.227 [2024-07-25 13:52:53.087519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.087544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 00:23:56.228 [2024-07-25 13:52:53.087652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.087683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 00:23:56.228 [2024-07-25 13:52:53.087774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.087801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 00:23:56.228 [2024-07-25 13:52:53.087889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.087915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 00:23:56.228 [2024-07-25 13:52:53.088001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.088027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 
00:23:56.228 [2024-07-25 13:52:53.088111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.088136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 00:23:56.228 [2024-07-25 13:52:53.088215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.088240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 00:23:56.228 [2024-07-25 13:52:53.088381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.088406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 00:23:56.228 [2024-07-25 13:52:53.088484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.088509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 00:23:56.228 [2024-07-25 13:52:53.088584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.088609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 00:23:56.228 [2024-07-25 13:52:53.088697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.088726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 00:23:56.228 [2024-07-25 13:52:53.088814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.088841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 00:23:56.228 [2024-07-25 13:52:53.088918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.088943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 00:23:56.228 [2024-07-25 13:52:53.089033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.089066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 00:23:56.228 [2024-07-25 13:52:53.089156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.089181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 
00:23:56.228 [2024-07-25 13:52:53.089261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.089285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 00:23:56.228 [2024-07-25 13:52:53.089403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.089430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 00:23:56.228 [2024-07-25 13:52:53.089516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.089543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 00:23:56.228 [2024-07-25 13:52:53.089624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.089649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 00:23:56.228 [2024-07-25 13:52:53.089748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.089773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 00:23:56.228 [2024-07-25 13:52:53.089851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.089877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 00:23:56.228 [2024-07-25 13:52:53.089961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.089987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 00:23:56.228 [2024-07-25 13:52:53.090106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.090133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 00:23:56.228 [2024-07-25 13:52:53.090218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.090245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 00:23:56.228 [2024-07-25 13:52:53.090333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.090358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 
00:23:56.228 [2024-07-25 13:52:53.090441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.228 [2024-07-25 13:52:53.090468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.228 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.090556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.090581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.090695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.090721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.090807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.090832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.090953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.090982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.091070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.091096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.091182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.091210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.091301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.091327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.091442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.091469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.091559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.091585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 
00:23:56.229 [2024-07-25 13:52:53.091672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.091698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.091821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.091859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.091954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.091982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.092076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.092103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.092185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.092212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.092288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.092313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.092431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.092463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.092566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.092593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.092677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.092702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.092783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.092810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 
00:23:56.229 [2024-07-25 13:52:53.092925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.092950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.093045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.093090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.093186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.093215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.093298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.093323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.093404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.093429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.093512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.093537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.093618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.093643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.093724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.093750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.093859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.093885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.093998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.094024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 
00:23:56.229 [2024-07-25 13:52:53.094129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.094156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.094243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.094268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.094347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.094372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.094484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.094511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.094639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.229 [2024-07-25 13:52:53.094677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.229 qpair failed and we were unable to recover it. 00:23:56.229 [2024-07-25 13:52:53.094772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.094800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 00:23:56.230 [2024-07-25 13:52:53.094914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.094940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 00:23:56.230 [2024-07-25 13:52:53.095023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.095049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 00:23:56.230 [2024-07-25 13:52:53.095148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.095174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 00:23:56.230 [2024-07-25 13:52:53.095259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.095286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 
00:23:56.230 [2024-07-25 13:52:53.095373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.095398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 00:23:56.230 [2024-07-25 13:52:53.095505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.095531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 00:23:56.230 [2024-07-25 13:52:53.095615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.095640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 00:23:56.230 [2024-07-25 13:52:53.095723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.095750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 00:23:56.230 [2024-07-25 13:52:53.095835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.095860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 00:23:56.230 [2024-07-25 13:52:53.095942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.095969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 00:23:56.230 [2024-07-25 13:52:53.096086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.096113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 00:23:56.230 [2024-07-25 13:52:53.096198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.096224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 00:23:56.230 [2024-07-25 13:52:53.096307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.096333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 00:23:56.230 [2024-07-25 13:52:53.096418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.096446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 
00:23:56.230 [2024-07-25 13:52:53.096563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.096589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 00:23:56.230 [2024-07-25 13:52:53.096681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.096707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 00:23:56.230 [2024-07-25 13:52:53.096788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.096813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 00:23:56.230 [2024-07-25 13:52:53.096927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.096953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 00:23:56.230 [2024-07-25 13:52:53.097032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.097057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 00:23:56.230 [2024-07-25 13:52:53.097156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.097183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 00:23:56.230 [2024-07-25 13:52:53.097262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.097297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 00:23:56.230 [2024-07-25 13:52:53.097377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.097403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 00:23:56.230 [2024-07-25 13:52:53.097514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.097539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 00:23:56.230 [2024-07-25 13:52:53.097650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.230 [2024-07-25 13:52:53.097675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.230 qpair failed and we were unable to recover it. 
00:23:56.230 [2024-07-25 13:52:53.097778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.230 [2024-07-25 13:52:53.097817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.230 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it" triplet repeats continuously from 13:52:53.097903 through 13:52:53.122787, cycling over tqpairs 0x7f3c88000b90, 0x7f3c90000b90, 0x7f3c98000b90, and 0x118b250; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:23:56.236 [2024-07-25 13:52:53.122871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.236 [2024-07-25 13:52:53.122896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.236 qpair failed and we were unable to recover it.
00:23:56.237 [2024-07-25 13:52:53.123022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.123047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.123135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.123161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.123260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.123299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.123387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.123414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.123530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.123555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.123635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.123661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.123772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.123796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.123886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.123912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.124002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.124029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.124126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.124155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 
00:23:56.237 [2024-07-25 13:52:53.124273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.124298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.124395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.124420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.124503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.124528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.124612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.124637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.124730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.124758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.124857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.124895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.124992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.125020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.125128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.125155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.125232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.125257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.125367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.125393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 
00:23:56.237 [2024-07-25 13:52:53.125505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.125530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.125645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.125670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.125779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.125804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.125905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.125930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.126013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.126038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.126158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.126192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.126276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.126302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.126410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.126441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.237 [2024-07-25 13:52:53.126531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.237 [2024-07-25 13:52:53.126557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.237 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.126673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.126698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 
00:23:56.238 [2024-07-25 13:52:53.126783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.126811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.126895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.126920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.127003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.127029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.127147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.127173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.127252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.127278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.127357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.127382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.127491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.127517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.127626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.127653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.127738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.127766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.127885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.127910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 
00:23:56.238 [2024-07-25 13:52:53.128020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.128046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.128137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.128162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.128274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.128301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.128375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.128399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.128482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.128509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.128624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.128649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.128729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.128755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.128862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.128887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.128971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.128997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.129091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.129129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 
00:23:56.238 [2024-07-25 13:52:53.129224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.129250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.129336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.129361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.129479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.129504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.129620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.129647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.129766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.129792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.129871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.129897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.129986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.130012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.130136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.130162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.130239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.130263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.130365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.130391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 
00:23:56.238 [2024-07-25 13:52:53.130534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.130559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.130652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.130680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.130760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.238 [2024-07-25 13:52:53.130785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.238 qpair failed and we were unable to recover it. 00:23:56.238 [2024-07-25 13:52:53.130871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.130896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.131011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.131036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.131155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.131193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.131277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.131304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.131417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.131448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.131532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.131557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.131639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.131664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 
00:23:56.239 [2024-07-25 13:52:53.131744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.131770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.131856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.131881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.131972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.132013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.132147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.132174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.132255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.132280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.132372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.132398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.132483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.132507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.132623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.132649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.132795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.132820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.132934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.132960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 
00:23:56.239 [2024-07-25 13:52:53.133043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.133077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.133208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.133234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.133312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.133339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.133424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.133449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.133528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.133553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.133665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.133690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.133773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.133799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.133878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.133903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.134066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.134092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.134178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.134203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 
00:23:56.239 [2024-07-25 13:52:53.134321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.134347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.134432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.134457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.134594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.134620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.134710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.134736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.134823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.134853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.134939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.134967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.135049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.135081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.239 [2024-07-25 13:52:53.135198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.239 [2024-07-25 13:52:53.135224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.239 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.135307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.135332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.135420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.135446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 
00:23:56.240 [2024-07-25 13:52:53.135546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.135584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.135669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.135697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.135790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.135827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.135929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.135956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.136040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.136073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.136165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.136190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.136299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.136327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.136416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.136441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.136560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.136587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.136668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.136693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 
00:23:56.240 [2024-07-25 13:52:53.136816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.136846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.136960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.136987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.137082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.137114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.137196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.137221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.137352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.137378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.137453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.137479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.137563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.137590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.137682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.137710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.137807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.137845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.137964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.137991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 
00:23:56.240 [2024-07-25 13:52:53.138078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.138105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.138191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.138218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.138297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.138322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.138433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.138458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.138538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.138563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.138639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.138664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.138741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.138769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.138864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.138902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.138998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.139025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.139166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.139192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 
00:23:56.240 [2024-07-25 13:52:53.139279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.139305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.139393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.240 [2024-07-25 13:52:53.139419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.240 qpair failed and we were unable to recover it. 00:23:56.240 [2024-07-25 13:52:53.139509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.139537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.139625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.139652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.139804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.139841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.139928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.139953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.140041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.140072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.140169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.140194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.140303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.140328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.140451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.140478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 
00:23:56.241 [2024-07-25 13:52:53.140563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.140590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.140677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.140702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.140785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.140811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.140891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.140918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.140999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.141025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.141127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.141154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.141236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.141261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.141347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.141373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.141458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.141484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.141578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.141605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 
00:23:56.241 [2024-07-25 13:52:53.141688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.141713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.141794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.141820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.141926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.141951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.142035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.142065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.142156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.142181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.142262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.142286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.142375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.142401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.142518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.142545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.142621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.142647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.142732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.142758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 
00:23:56.241 [2024-07-25 13:52:53.142839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.142865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.142994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.143035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.241 [2024-07-25 13:52:53.143134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.241 [2024-07-25 13:52:53.143159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.241 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.143244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.143271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.143393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.143418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.143495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.143521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.143603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.143628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.143711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.143738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.143818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.143844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.143951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.143977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 
00:23:56.242 [2024-07-25 13:52:53.144092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.144122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.144206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.144232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.144352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.144378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.144459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.144486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.144568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.144594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.144714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.144740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.144824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.144850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.144926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.144951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.145035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.145066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.145158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.145183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 
00:23:56.242 [2024-07-25 13:52:53.145262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.145287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.145409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.145433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.145513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.145538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.145623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.145648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.145743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.145781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.145869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.145896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.146022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.146069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.146167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.146194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.146277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.146302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.146414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.146440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 
00:23:56.242 [2024-07-25 13:52:53.146516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.146541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.242 qpair failed and we were unable to recover it. 00:23:56.242 [2024-07-25 13:52:53.146622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.242 [2024-07-25 13:52:53.146647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.146745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.146785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.146875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.146903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.146992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.147021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.147117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.147144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.147250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.147276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.147368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.147394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.147503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.147528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.147608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.147633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 
00:23:56.243 [2024-07-25 13:52:53.147727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.147755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.147895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.147927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.148010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.148036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.148135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.148162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.148244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.148270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.148381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.148406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.148489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.148513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.148599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.148623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.148718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.148743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.148834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.148861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 
00:23:56.243 [2024-07-25 13:52:53.148967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.149004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.149102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.149129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.149209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.149235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.149308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.149332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.149440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.149466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.149556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.149581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.149665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.149689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.149767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.149791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.149905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.149933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.150015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.150040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 
00:23:56.243 [2024-07-25 13:52:53.150140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.150166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.150249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.150274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.243 [2024-07-25 13:52:53.150424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.243 [2024-07-25 13:52:53.150452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.243 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.150542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.150568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.150658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.150685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.150765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.150790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.150869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.150894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.151004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.151029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.151129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.151162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.151245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.151270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 
00:23:56.244 [2024-07-25 13:52:53.151346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.151372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.151455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.151480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.151565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.151591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.151703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.151728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.151818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.151845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.151927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.151952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.152038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.152070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.152165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.152191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.152305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.152330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.152414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.152439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 
00:23:56.244 [2024-07-25 13:52:53.152538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.152563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.152644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.152669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.152756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.152781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.152861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.152887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.152970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.152996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.153125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.153165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.153266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.153304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.153388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.153415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.153498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.153523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.153624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.153649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 
00:23:56.244 [2024-07-25 13:52:53.153729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.153754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.153868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.153893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.153975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.154000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.154085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.154110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.154194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.154218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.154308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.154347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.154453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.244 [2024-07-25 13:52:53.154491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.244 qpair failed and we were unable to recover it. 00:23:56.244 [2024-07-25 13:52:53.154576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.154603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.154692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.154719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.154834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.154860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 
00:23:56.245 [2024-07-25 13:52:53.154952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.154977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.155085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.155111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.155189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.155215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.155298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.155324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.155408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.155435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.155526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.155552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.155657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.155682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.155773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.155798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.155881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.155911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.155990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.156014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 
00:23:56.245 [2024-07-25 13:52:53.156113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.156140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.156220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.156245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.156332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.156359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.156473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.156498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.156597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.156636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.156721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.156748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.156836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.156862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.156974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.156999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.157090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.157116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.157196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.157221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 
00:23:56.245 [2024-07-25 13:52:53.157297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.157322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.157398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.157422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.157513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.157541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.157652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.157679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.157764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.157789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.157872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.157897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.157980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.158005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.158104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.245 [2024-07-25 13:52:53.158142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.245 qpair failed and we were unable to recover it. 00:23:56.245 [2024-07-25 13:52:53.158237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.246 [2024-07-25 13:52:53.158263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.246 qpair failed and we were unable to recover it. 00:23:56.246 [2024-07-25 13:52:53.158353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.246 [2024-07-25 13:52:53.158378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.246 qpair failed and we were unable to recover it. 
00:23:56.246 [2024-07-25 13:52:53.158453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.246 [2024-07-25 13:52:53.158478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.246 qpair failed and we were unable to recover it. 00:23:56.246 [2024-07-25 13:52:53.158557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.246 [2024-07-25 13:52:53.158582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.246 qpair failed and we were unable to recover it. 00:23:56.246 [2024-07-25 13:52:53.158669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.246 [2024-07-25 13:52:53.158696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.246 qpair failed and we were unable to recover it. 00:23:56.246 [2024-07-25 13:52:53.158789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.246 [2024-07-25 13:52:53.158816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.246 qpair failed and we were unable to recover it. 00:23:56.246 [2024-07-25 13:52:53.158924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.246 [2024-07-25 13:52:53.158949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.246 qpair failed and we were unable to recover it. 00:23:56.246 [2024-07-25 13:52:53.159032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.246 [2024-07-25 13:52:53.159067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.246 qpair failed and we were unable to recover it. 00:23:56.246 [2024-07-25 13:52:53.159164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.246 [2024-07-25 13:52:53.159190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.246 qpair failed and we were unable to recover it. 00:23:56.246 [2024-07-25 13:52:53.159272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.246 [2024-07-25 13:52:53.159298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.246 qpair failed and we were unable to recover it. 00:23:56.246 [2024-07-25 13:52:53.159421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.246 [2024-07-25 13:52:53.159446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.246 qpair failed and we were unable to recover it. 00:23:56.246 [2024-07-25 13:52:53.159560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.246 [2024-07-25 13:52:53.159585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.246 qpair failed and we were unable to recover it. 
00:23:56.246 [2024-07-25 13:52:53.159666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.246 [2024-07-25 13:52:53.159691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.246 qpair failed and we were unable to recover it.
00:23:56.246 [... the same error pattern (posix_sock_create connect() failed with errno = 111, then the nvme_tcp_qpair_connect_sock connection error, then "qpair failed and we were unable to recover it.") repeats back-to-back through 13:52:53.163, cycling over tqpair addresses 0x7f3c88000b90, 0x7f3c90000b90, 0x7f3c98000b90, and 0x118b250, every attempt targeting addr=10.0.0.2, port=4420 ...]
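errno = 111 is ECONNREFUSED on Linux: each connect() reaches 10.0.0.2, but nothing is listening on port 4420 while the disconnect test has the target torn down, so every attempt is refused. A minimal sketch of the same failure mode using bash's /dev/tcp redirection; the address and port simply mirror the log, and the 1-second timeout is an assumption of this sketch, not something the harness does:

    # Probe the target the way posix_sock_create() effectively does; with no
    # listener on 10.0.0.2:4420 the connect is refused (ECONNREFUSED, errno 111).
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
        echo "listener present"
    else
        echo "connect refused or timed out (expected while the target is down)"
    fi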
00:23:56.247 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:56.247 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:23:56.247 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:23:56.247 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:23:56.247 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:23:56.247 [... the connect() failed / qpair error pattern continues interleaved between the shell-trace lines above, hitting tqpairs 0x118b250, 0x7f3c90000b90, and 0x7f3c88000b90 at 10.0.0.2:4420 through 13:52:53.164 ...]
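The shell-trace lines above show the harness leaving a wait loop (the (( i == 0 )) check falling through to return 0) and closing the start_nvmf_tgt timing section. As an illustration only, here is a hypothetical wait-until-ready loop consistent with that trace; wait_for_tgt and some_readiness_probe are invented names for this sketch, and SPDK's real helper in autotest_common.sh differs in detail:

    # Hypothetical sketch: poll until the target answers, with a retry budget.
    wait_for_tgt() {
        local i=$1                         # retry budget (assumed)
        while (( i > 0 )); do
            some_readiness_probe && break  # invented probe, e.g. an RPC ping
            sleep 1
            i=$(( i - 1 ))
        done
        (( i == 0 )) && return 1           # budget exhausted: report failure
        return 0                           # matches the traced "return 0" path
    }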
00:23:56.247 [2024-07-25 13:52:53.164095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.247 [2024-07-25 13:52:53.164121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.247 qpair failed and we were unable to recover it.
00:23:56.252 [... the identical error pattern then repeats without interruption through 13:52:53.184, again cycling over tqpairs 0x7f3c88000b90, 0x7f3c90000b90, 0x7f3c98000b90, and 0x118b250, every attempt targeting addr=10.0.0.2, port=4420; the final entry, at 13:52:53.184331 against tqpair=0x7f3c88000b90, ends with "qpair failed and we were unable to recover it." ...]
00:23:56.252 [2024-07-25 13:52:53.184421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.252 [2024-07-25 13:52:53.184446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.252 qpair failed and we were unable to recover it. 00:23:56.252 [2024-07-25 13:52:53.184522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.252 [2024-07-25 13:52:53.184547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.252 qpair failed and we were unable to recover it. 00:23:56.252 [2024-07-25 13:52:53.184692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.252 [2024-07-25 13:52:53.184720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.252 qpair failed and we were unable to recover it. 00:23:56.252 [2024-07-25 13:52:53.184806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.252 [2024-07-25 13:52:53.184832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.252 qpair failed and we were unable to recover it. 00:23:56.252 [2024-07-25 13:52:53.184915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.252 [2024-07-25 13:52:53.184941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.252 qpair failed and we were unable to recover it. 00:23:56.252 [2024-07-25 13:52:53.185065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.252 [2024-07-25 13:52:53.185091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.252 qpair failed and we were unable to recover it. 00:23:56.252 [2024-07-25 13:52:53.185182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.252 [2024-07-25 13:52:53.185207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.252 qpair failed and we were unable to recover it. 00:23:56.252 [2024-07-25 13:52:53.185286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.252 [2024-07-25 13:52:53.185312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.252 qpair failed and we were unable to recover it. 00:23:56.252 [2024-07-25 13:52:53.185396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.252 [2024-07-25 13:52:53.185421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.252 qpair failed and we were unable to recover it. 00:23:56.252 [2024-07-25 13:52:53.185540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.252 [2024-07-25 13:52:53.185566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.252 qpair failed and we were unable to recover it. 
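On Linux, errno 111 is ECONNREFUSED: the TCP connect() is rejected before any NVMe-oF session can form, consistent with the target side of 10.0.0.2:4420 being down, which is exactly what this disconnect test exercises. A minimal probe sketch (hypothetical, not part of this run; it assumes the target address is reachable but nothing is listening on the port):

  # a refused connect here matches the "errno = 111" entries above
  bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420 && echo connected' \
      || echo "connect refused (ECONNREFUSED) while the target is down"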
00:23:56.252 [2024-07-25 13:52:53.185660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.253 [2024-07-25 13:52:53.185688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420
00:23:56.253 qpair failed and we were unable to recover it.
[... duplicate triplets for tqpairs 0x7f3c88000b90 and 0x7f3c98000b90 omitted ...]
00:23:56.253 [2024-07-25 13:52:53.186249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.253 [2024-07-25 13:52:53.186287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.253 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:56.253 qpair failed and we were unable to recover it.
[... duplicate triplets for tqpair 0x7f3c88000b90 omitted ...]
00:23:56.253 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
[... duplicate triplets for tqpair 0x7f3c88000b90 omitted ...]
00:23:56.253 [2024-07-25 13:52:53.186859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.253 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:56.253 [2024-07-25 13:52:53.186887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420
00:23:56.253 qpair failed and we were unable to recover it.
00:23:56.253 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... duplicate triplets for tqpairs 0x7f3c90000b90, 0x7f3c98000b90, and 0x118b250 omitted ...]
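The xtrace lines above show tc2 setting up while the reconnect failures continue: it registers a cleanup trap (best-effort shared-memory dump, then teardown) and creates the Malloc0 bdev the target will export. A sketch of the same two steps, with the harness helpers kept as-is (process_shm and nvmftestfini are autotest functions, and rpc_cmd wraps SPDK's scripts/rpc.py; the path below is assumed):

  # run diagnostics best-effort ("|| :" ignores failure), then tear down, on any exit path
  trap 'process_shm --id "$NVMF_APP_SHM_ID" || :; nvmftestfini' SIGINT SIGTERM EXIT
  # direct equivalent of the rpc_cmd call: a 64 MiB RAM-backed bdev,
  # 512-byte blocks, named Malloc0
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0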
00:23:56.515 [2024-07-25 13:52:53.188007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.515 [2024-07-25 13:52:53.188033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420
00:23:56.515 qpair failed and we were unable to recover it.
[... the connect()/qpair-failure triplet repeats continuously for tqpairs 0x118b250, 0x7f3c88000b90, 0x7f3c90000b90, and 0x7f3c98000b90; the duplicated entries are omitted ...]
00:23:56.517 [2024-07-25 13:52:53.204970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:56.517 [2024-07-25 13:52:53.204997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420
00:23:56.517 qpair failed and we were unable to recover it.
00:23:56.517 [2024-07-25 13:52:53.205119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.205145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.205222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.205246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.205344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.205369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.205461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.205485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.205591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.205617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.205700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.205728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.205813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.205838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.205926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.205955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.206045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.206080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.206171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.206197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 
00:23:56.517 [2024-07-25 13:52:53.206278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.206304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.206418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.206444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.206552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.206578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.206656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.206682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.206767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.206795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.206917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.206958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.207044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.207079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.207168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.207195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.207305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.207331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.207439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.207465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 
00:23:56.517 [2024-07-25 13:52:53.207556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.207582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.207665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.207691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.207808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.207834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.207920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.207947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.208029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.208070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.208175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.208215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.517 [2024-07-25 13:52:53.208304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.517 [2024-07-25 13:52:53.208331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.517 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.208446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.208473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.208585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.208611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.208735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.208775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 
00:23:56.518 [2024-07-25 13:52:53.208867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.208894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.208990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.209016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.209115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.209142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.209253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.209279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.209356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.209382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.209469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.209496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.209584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.209611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.209719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.209745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.209826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.209852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.209926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.209951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 
00:23:56.518 [2024-07-25 13:52:53.210031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.210056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.210144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.210170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.210287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.210313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.210428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.210455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.210547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.210581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.210701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.210730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.210822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.210848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.210932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.210959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.211049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.211085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.211195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.211222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 
00:23:56.518 [2024-07-25 13:52:53.211330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.211356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.211439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.211466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.211557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.211584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.211669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.211696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.211803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.211830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.211913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.211939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.212048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.212082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.212162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.212194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.212310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.212336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.212421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.212448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 
00:23:56.518 [2024-07-25 13:52:53.212527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.212553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.212636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.212661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.212778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.212806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.212911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.212951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.213041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.213075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.213158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.213185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.213279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.213304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.213385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.213411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.213491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.213518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 00:23:56.518 [2024-07-25 13:52:53.213626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.518 [2024-07-25 13:52:53.213652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.518 qpair failed and we were unable to recover it. 
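errno = 111 in the repeated posix_sock_create errors above is Linux's ECONNREFUSED: the host keeps issuing connect() toward 10.0.0.2:4420 while nothing is listening on that port yet, so each TCP SYN is answered with a reset and nvme_tcp_qpair_connect_sock gives up on the qpair. A minimal bash sketch of the same failure mode (the address and port mirror this log; any endpoint with no listener behaves the same):

    # Try to open a TCP connection with bash's /dev/tcp redirection; with no
    # listener on the far side, the underlying connect() fails with
    # ECONNREFUSED, which is errno 111 on Linux. The subshell keeps the exec
    # failure from terminating the calling shell.
    if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "connect() to 10.0.0.2:4420 refused"
    fi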
00:23:56.518 Malloc0
00:23:56.518 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:56.518 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:23:56.518 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:56.518 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:23:56.518 [... these script-trace lines were interleaved with further connect() failed (errno = 111) / qpair failed retries, 13:52:53.213748 through 13:52:53.217104, against the same four tqpairs ...]
00:23:56.519 [2024-07-25 13:52:53.217467] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:56.519 [... connect() failed (errno = 111) / qpair failed retries continue on either side of this notice, 13:52:53.217193 through 13:52:53.218359 ...]
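The *** TCP Transport Init *** notice marks the target side coming up in response to the rpc_cmd nvmf_create_transport call traced above; the host's connect() retries can only start succeeding once the transport exists and a listener is added on 10.0.0.2:4420. As a rough sketch only (not the exact commands of this run; the subsystem NQN and script path are placeholders), the equivalent manual SPDK RPC sequence would be:

    # Create the TCP transport, then a subsystem with a listener on the
    # address/port the host in this log is retrying against.
    scripts/rpc.py nvmf_create_transport -t TCP
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420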
00:23:56.520 [... the connect() failed (errno = 111) / qpair failed and we were unable to recover it sequence keeps repeating through 13:52:53.225664, cycling over tqpairs 0x118b250, 0x7f3c98000b90, 0x7f3c90000b90, and 0x7f3c88000b90 ...]
00:23:56.520 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.520 [2024-07-25 13:52:53.225803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:56.520 [2024-07-25 13:52:53.225830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.225915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.520 [2024-07-25 13:52:53.225942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.226029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:56.520 [2024-07-25 13:52:53.226055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.226147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.226174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.226272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.226300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.226410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.226438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.226525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.226552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.226638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.226665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 
00:23:56.520 [2024-07-25 13:52:53.226748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.226774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.226853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.226878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.226969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.226997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.227146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.227185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.227280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.227307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.227387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.227414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.227500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.227527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.227643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.227670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.227787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.227814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.227895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.227922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 
00:23:56.520 [2024-07-25 13:52:53.228005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.228031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.228122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.228154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.228239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.228265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.228339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.228364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.228440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.228466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.228572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.228599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.228685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.228711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.228794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.228821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.228928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.228954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.229070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.229097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 
00:23:56.520 [2024-07-25 13:52:53.229180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.229206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.229290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.229315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.229391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.229416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.229490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.229516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.229603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.229629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.229728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.229755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.229838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.229864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.229949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.229975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.230053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.230087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.230162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.230188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 
00:23:56.520 [2024-07-25 13:52:53.230274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.230302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.520 [2024-07-25 13:52:53.230381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.520 [2024-07-25 13:52:53.230408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.520 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.230514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.230554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.230638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.230666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.230745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.230772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.230847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.230873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.230951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.230978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.231055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.231087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.231168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.231200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.231288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.231316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 
00:23:56.521 [2024-07-25 13:52:53.231394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.231421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.231538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.231564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.231651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.231677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.231760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.231786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.231898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.231925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.232001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.232027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.232115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.232142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.232222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.232248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.232338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.232366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.232458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.232486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 
00:23:56.521 [2024-07-25 13:52:53.232570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.232597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.232681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.232708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.232823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.232851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.232967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.232995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.233085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.233113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.233228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.233255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.233337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.233364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.233449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.233477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.233564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.233590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.233664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.233690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 
00:23:56.521 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.521 [2024-07-25 13:52:53.233762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.233788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:56.521 [2024-07-25 13:52:53.233896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.233922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.521 [2024-07-25 13:52:53.234018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.234046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:56.521 [2024-07-25 13:52:53.234143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.234175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.234259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.234286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.234364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.234391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.234481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.234508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.234590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.234617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 
00:23:56.521 [2024-07-25 13:52:53.234726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.234755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.234841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.234867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.234949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.234975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.235067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.235094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.235174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.235200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.235289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.235314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.235395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.235420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.235499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.235524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.235604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.235630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.235757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.235785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 
00:23:56.521 [2024-07-25 13:52:53.235874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.235903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.235982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.236008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.236101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.236128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.236209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.236234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.236323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.236349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.236454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.236479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.236562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.236591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.236677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.236704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.236831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.236859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.236938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.236965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 
00:23:56.521 [2024-07-25 13:52:53.237049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.237081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.237161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.237187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.237266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.237295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.237406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.237431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.237510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.237535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.237648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.237674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.237752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.237778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.237868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.237897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.521 qpair failed and we were unable to recover it. 00:23:56.521 [2024-07-25 13:52:53.238017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.521 [2024-07-25 13:52:53.238046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.238142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.238167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 
00:23:56.522 [2024-07-25 13:52:53.238275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.238301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.238380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.238406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.238491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.238518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.238600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.238627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.238723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.238763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.238877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.238905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.238997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.239025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.239152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.239180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.239271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.239298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.239378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.239406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 
00:23:56.522 [2024-07-25 13:52:53.239516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.239543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.239629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.239656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.239746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.239774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.239878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.239917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.240004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.240032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.240129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.240156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.240245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.240272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.240356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.240383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.240470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.240496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.240583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.240616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 
00:23:56.522 [2024-07-25 13:52:53.240698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.240724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.240805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.240832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.240943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.240969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.241052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.241085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.241169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.241195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.241276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.241303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.241383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.241409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.241488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.241514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.241623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.241650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 
00:23:56.522 [2024-07-25 13:52:53.241728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.241754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.522 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-07-25 13:52:53.241893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-07-25 13:52:53.241926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable [2024-07-25 13:52:53.242013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.242050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [2024-07-25 13:52:53.242156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.242183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.242265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.242291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.242371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.242397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.242476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.242502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.242585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.242611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it.
00:23:56.522 [2024-07-25 13:52:53.242686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.242712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.242792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.242818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.242894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.242920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.243003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.243032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.243125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.243153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.243231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.243257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.243346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.243372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.243475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.243519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.243637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.243665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.243747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.243774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 
00:23:56.522 [2024-07-25 13:52:53.243860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.243887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.244027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.244054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.244144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.244170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.244259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.244287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.244370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.244396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.244482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.244509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.244624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.244651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c88000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.244747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.244778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.244867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.244895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.244980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.245007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 
00:23:56.522 [2024-07-25 13:52:53.245121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.245154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c98000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.245251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.245290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118b250 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.245375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.245403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.245486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.245512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.245595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:56.522 [2024-07-25 13:52:53.245622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3c90000b90 with addr=10.0.0.2, port=4420 00:23:56.522 qpair failed and we were unable to recover it. 00:23:56.522 [2024-07-25 13:52:53.245709] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:56.522 [2024-07-25 13:52:53.248115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.523 [2024-07-25 13:52:53.248220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.523 [2024-07-25 13:52:53.248247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.523 [2024-07-25 13:52:53.248263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.523 [2024-07-25 13:52:53.248276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.523 [2024-07-25 13:52:53.248311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.523 qpair failed and we were unable to recover it. 
00:23:56.523 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.523 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:56.523 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.523 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:56.523 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.523 13:52:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 665220 00:23:56.523 [2024-07-25 13:52:53.258008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.523 [2024-07-25 13:52:53.258106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.523 [2024-07-25 13:52:53.258132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.523 [2024-07-25 13:52:53.258147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.523 [2024-07-25 13:52:53.258160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.523 [2024-07-25 13:52:53.258191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.523 qpair failed and we were unable to recover it. 00:23:56.523 [2024-07-25 13:52:53.268057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.523 [2024-07-25 13:52:53.268158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.523 [2024-07-25 13:52:53.268184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.523 [2024-07-25 13:52:53.268200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.523 [2024-07-25 13:52:53.268213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.523 [2024-07-25 13:52:53.268243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.523 qpair failed and we were unable to recover it. 
00:23:56.523 [2024-07-25 13:52:53.278095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.523 [2024-07-25 13:52:53.278194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.523 [2024-07-25 13:52:53.278223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.523 [2024-07-25 13:52:53.278239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.523 [2024-07-25 13:52:53.278252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.523 [2024-07-25 13:52:53.278284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.523 qpair failed and we were unable to recover it. 00:23:56.523 [2024-07-25 13:52:53.288031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.523 [2024-07-25 13:52:53.288129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.523 [2024-07-25 13:52:53.288156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.523 [2024-07-25 13:52:53.288171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.523 [2024-07-25 13:52:53.288184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.523 [2024-07-25 13:52:53.288213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.523 qpair failed and we were unable to recover it. 00:23:56.523 [2024-07-25 13:52:53.298071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.523 [2024-07-25 13:52:53.298161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.523 [2024-07-25 13:52:53.298186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.523 [2024-07-25 13:52:53.298202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.523 [2024-07-25 13:52:53.298214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.523 [2024-07-25 13:52:53.298245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.523 qpair failed and we were unable to recover it. 
00:23:56.523 [2024-07-25 13:52:53.308090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.523 [2024-07-25 13:52:53.308178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.523 [2024-07-25 13:52:53.308208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.523 [2024-07-25 13:52:53.308224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.523 [2024-07-25 13:52:53.308238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.523 [2024-07-25 13:52:53.308268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.523 qpair failed and we were unable to recover it. 00:23:56.523 [2024-07-25 13:52:53.318153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.523 [2024-07-25 13:52:53.318247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.523 [2024-07-25 13:52:53.318275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.523 [2024-07-25 13:52:53.318291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.523 [2024-07-25 13:52:53.318304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.523 [2024-07-25 13:52:53.318334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.523 qpair failed and we were unable to recover it. 00:23:56.523 [2024-07-25 13:52:53.328115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.523 [2024-07-25 13:52:53.328208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.523 [2024-07-25 13:52:53.328233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.523 [2024-07-25 13:52:53.328249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.523 [2024-07-25 13:52:53.328262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.523 [2024-07-25 13:52:53.328291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.523 qpair failed and we were unable to recover it. 
00:23:56.523 [2024-07-25 13:52:53.338140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.523 [2024-07-25 13:52:53.338243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.523 [2024-07-25 13:52:53.338270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.523 [2024-07-25 13:52:53.338285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.523 [2024-07-25 13:52:53.338297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.523 [2024-07-25 13:52:53.338327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.523 qpair failed and we were unable to recover it. 00:23:56.523 [2024-07-25 13:52:53.348220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.523 [2024-07-25 13:52:53.348358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.523 [2024-07-25 13:52:53.348384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.523 [2024-07-25 13:52:53.348400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.523 [2024-07-25 13:52:53.348413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.523 [2024-07-25 13:52:53.348462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.523 qpair failed and we were unable to recover it. 00:23:56.523 [2024-07-25 13:52:53.358195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.523 [2024-07-25 13:52:53.358287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.523 [2024-07-25 13:52:53.358311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.523 [2024-07-25 13:52:53.358326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.523 [2024-07-25 13:52:53.358339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.523 [2024-07-25 13:52:53.358368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.523 qpair failed and we were unable to recover it. 
00:23:56.523 [2024-07-25 13:52:53.368335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.523 [2024-07-25 13:52:53.368423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.523 [2024-07-25 13:52:53.368452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.523 [2024-07-25 13:52:53.368467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.523 [2024-07-25 13:52:53.368480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.523 [2024-07-25 13:52:53.368510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.523 qpair failed and we were unable to recover it. 00:23:56.523 [2024-07-25 13:52:53.378292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.523 [2024-07-25 13:52:53.378376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.523 [2024-07-25 13:52:53.378401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.523 [2024-07-25 13:52:53.378415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.523 [2024-07-25 13:52:53.378428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.523 [2024-07-25 13:52:53.378458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.523 qpair failed and we were unable to recover it. 00:23:56.523 [2024-07-25 13:52:53.388288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.523 [2024-07-25 13:52:53.388389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.523 [2024-07-25 13:52:53.388420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.523 [2024-07-25 13:52:53.388435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.523 [2024-07-25 13:52:53.388448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.523 [2024-07-25 13:52:53.388478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.523 qpair failed and we were unable to recover it. 
00:23:56.523 [2024-07-25 13:52:53.398365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.523 [2024-07-25 13:52:53.398459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.523 [2024-07-25 13:52:53.398489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.523 [2024-07-25 13:52:53.398505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.523 [2024-07-25 13:52:53.398518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.523 [2024-07-25 13:52:53.398563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.523 qpair failed and we were unable to recover it. 00:23:56.523 [2024-07-25 13:52:53.408444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.523 [2024-07-25 13:52:53.408537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.523 [2024-07-25 13:52:53.408562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.523 [2024-07-25 13:52:53.408576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.523 [2024-07-25 13:52:53.408589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.523 [2024-07-25 13:52:53.408619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.523 qpair failed and we were unable to recover it. 00:23:56.523 [2024-07-25 13:52:53.418403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.523 [2024-07-25 13:52:53.418488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.523 [2024-07-25 13:52:53.418513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.523 [2024-07-25 13:52:53.418528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.523 [2024-07-25 13:52:53.418541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.523 [2024-07-25 13:52:53.418570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.523 qpair failed and we were unable to recover it. 
00:23:56.523 [2024-07-25 13:52:53.428401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.523 [2024-07-25 13:52:53.428486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.523 [2024-07-25 13:52:53.428511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.523 [2024-07-25 13:52:53.428526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.523 [2024-07-25 13:52:53.428539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.523 [2024-07-25 13:52:53.428568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.523 qpair failed and we were unable to recover it. 00:23:56.523 [2024-07-25 13:52:53.438440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.523 [2024-07-25 13:52:53.438531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.523 [2024-07-25 13:52:53.438556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.523 [2024-07-25 13:52:53.438571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.523 [2024-07-25 13:52:53.438588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.523 [2024-07-25 13:52:53.438618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.523 qpair failed and we were unable to recover it. 00:23:56.523 [2024-07-25 13:52:53.448471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.523 [2024-07-25 13:52:53.448558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.523 [2024-07-25 13:52:53.448583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.523 [2024-07-25 13:52:53.448599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.523 [2024-07-25 13:52:53.448612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.523 [2024-07-25 13:52:53.448641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.524 qpair failed and we were unable to recover it. 
00:23:56.524 [2024-07-25 13:52:53.458550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.524 [2024-07-25 13:52:53.458643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.524 [2024-07-25 13:52:53.458671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.524 [2024-07-25 13:52:53.458688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.524 [2024-07-25 13:52:53.458701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.524 [2024-07-25 13:52:53.458731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.524 qpair failed and we were unable to recover it. 00:23:56.524 [2024-07-25 13:52:53.468653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.524 [2024-07-25 13:52:53.468742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.524 [2024-07-25 13:52:53.468768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.524 [2024-07-25 13:52:53.468782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.524 [2024-07-25 13:52:53.468795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.524 [2024-07-25 13:52:53.468839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.524 qpair failed and we were unable to recover it. 00:23:56.524 [2024-07-25 13:52:53.478536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.524 [2024-07-25 13:52:53.478664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.524 [2024-07-25 13:52:53.478691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.524 [2024-07-25 13:52:53.478706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.524 [2024-07-25 13:52:53.478719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.524 [2024-07-25 13:52:53.478749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.524 qpair failed and we were unable to recover it. 
00:23:56.524 [2024-07-25 13:52:53.488588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.524 [2024-07-25 13:52:53.488687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.524 [2024-07-25 13:52:53.488712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.524 [2024-07-25 13:52:53.488727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.524 [2024-07-25 13:52:53.488740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.524 [2024-07-25 13:52:53.488769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.524 qpair failed and we were unable to recover it. 00:23:56.524 [2024-07-25 13:52:53.498606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.524 [2024-07-25 13:52:53.498704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.524 [2024-07-25 13:52:53.498729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.524 [2024-07-25 13:52:53.498744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.524 [2024-07-25 13:52:53.498756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.524 [2024-07-25 13:52:53.498785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.524 qpair failed and we were unable to recover it. 00:23:56.524 [2024-07-25 13:52:53.508633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.524 [2024-07-25 13:52:53.508719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.524 [2024-07-25 13:52:53.508743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.524 [2024-07-25 13:52:53.508758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.524 [2024-07-25 13:52:53.508771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.524 [2024-07-25 13:52:53.508800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.524 qpair failed and we were unable to recover it. 
00:23:56.524 [2024-07-25 13:52:53.518750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.524 [2024-07-25 13:52:53.518841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.524 [2024-07-25 13:52:53.518866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.524 [2024-07-25 13:52:53.518881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.524 [2024-07-25 13:52:53.518893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.524 [2024-07-25 13:52:53.518923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.524 qpair failed and we were unable to recover it. 00:23:56.524 [2024-07-25 13:52:53.528715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.524 [2024-07-25 13:52:53.528813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.524 [2024-07-25 13:52:53.528841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.524 [2024-07-25 13:52:53.528858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.524 [2024-07-25 13:52:53.528875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.524 [2024-07-25 13:52:53.528906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.524 qpair failed and we were unable to recover it. 00:23:56.524 [2024-07-25 13:52:53.538727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.524 [2024-07-25 13:52:53.538809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.524 [2024-07-25 13:52:53.538834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.524 [2024-07-25 13:52:53.538849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.524 [2024-07-25 13:52:53.538862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.524 [2024-07-25 13:52:53.538892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.524 qpair failed and we were unable to recover it. 
00:23:56.783 [2024-07-25 13:52:53.548769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.783 [2024-07-25 13:52:53.548857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.783 [2024-07-25 13:52:53.548881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.783 [2024-07-25 13:52:53.548896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.783 [2024-07-25 13:52:53.548910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.783 [2024-07-25 13:52:53.548939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.783 qpair failed and we were unable to recover it. 00:23:56.783 [2024-07-25 13:52:53.558774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.783 [2024-07-25 13:52:53.558863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.783 [2024-07-25 13:52:53.558888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.783 [2024-07-25 13:52:53.558903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.783 [2024-07-25 13:52:53.558916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.783 [2024-07-25 13:52:53.558958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.783 qpair failed and we were unable to recover it. 00:23:56.783 [2024-07-25 13:52:53.568839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.783 [2024-07-25 13:52:53.568936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.783 [2024-07-25 13:52:53.568961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.783 [2024-07-25 13:52:53.568975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.783 [2024-07-25 13:52:53.568988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.783 [2024-07-25 13:52:53.569017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.783 qpair failed and we were unable to recover it. 
00:23:56.783 [2024-07-25 13:52:53.578845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.783 [2024-07-25 13:52:53.578932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.783 [2024-07-25 13:52:53.578957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.783 [2024-07-25 13:52:53.578972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.783 [2024-07-25 13:52:53.578985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.783 [2024-07-25 13:52:53.579014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.783 qpair failed and we were unable to recover it. 00:23:56.783 [2024-07-25 13:52:53.588859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.783 [2024-07-25 13:52:53.588942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.783 [2024-07-25 13:52:53.588967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.783 [2024-07-25 13:52:53.588981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.783 [2024-07-25 13:52:53.588994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.783 [2024-07-25 13:52:53.589024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.783 qpair failed and we were unable to recover it. 00:23:56.783 [2024-07-25 13:52:53.598890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.783 [2024-07-25 13:52:53.598979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.783 [2024-07-25 13:52:53.599004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.783 [2024-07-25 13:52:53.599018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.783 [2024-07-25 13:52:53.599031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.783 [2024-07-25 13:52:53.599067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.783 qpair failed and we were unable to recover it. 
00:23:56.783 [2024-07-25 13:52:53.608903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.783 [2024-07-25 13:52:53.608988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.783 [2024-07-25 13:52:53.609012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.783 [2024-07-25 13:52:53.609027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.783 [2024-07-25 13:52:53.609040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.783 [2024-07-25 13:52:53.609078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.783 qpair failed and we were unable to recover it. 00:23:56.783 [2024-07-25 13:52:53.618944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.783 [2024-07-25 13:52:53.619027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.783 [2024-07-25 13:52:53.619052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.783 [2024-07-25 13:52:53.619080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.783 [2024-07-25 13:52:53.619095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.783 [2024-07-25 13:52:53.619137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.783 qpair failed and we were unable to recover it. 00:23:56.783 [2024-07-25 13:52:53.628988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.783 [2024-07-25 13:52:53.629075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.783 [2024-07-25 13:52:53.629101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.783 [2024-07-25 13:52:53.629116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.783 [2024-07-25 13:52:53.629128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.783 [2024-07-25 13:52:53.629158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.783 qpair failed and we were unable to recover it. 
00:23:56.783 [2024-07-25 13:52:53.639091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.784 [2024-07-25 13:52:53.639183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.784 [2024-07-25 13:52:53.639208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.784 [2024-07-25 13:52:53.639224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.784 [2024-07-25 13:52:53.639236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.784 [2024-07-25 13:52:53.639279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.784 qpair failed and we were unable to recover it. 00:23:56.784 [2024-07-25 13:52:53.649027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.784 [2024-07-25 13:52:53.649124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.784 [2024-07-25 13:52:53.649149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.784 [2024-07-25 13:52:53.649164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.784 [2024-07-25 13:52:53.649177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.784 [2024-07-25 13:52:53.649207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.784 qpair failed and we were unable to recover it. 00:23:56.784 [2024-07-25 13:52:53.659048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.784 [2024-07-25 13:52:53.659147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.784 [2024-07-25 13:52:53.659172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.784 [2024-07-25 13:52:53.659186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.784 [2024-07-25 13:52:53.659199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.784 [2024-07-25 13:52:53.659228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.784 qpair failed and we were unable to recover it. 
00:23:56.784 [2024-07-25 13:52:53.669084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.784 [2024-07-25 13:52:53.669170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.784 [2024-07-25 13:52:53.669195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.784 [2024-07-25 13:52:53.669210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.784 [2024-07-25 13:52:53.669222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.784 [2024-07-25 13:52:53.669252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.784 qpair failed and we were unable to recover it. 00:23:56.784 [2024-07-25 13:52:53.679137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.784 [2024-07-25 13:52:53.679229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.784 [2024-07-25 13:52:53.679253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.784 [2024-07-25 13:52:53.679268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.784 [2024-07-25 13:52:53.679281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.784 [2024-07-25 13:52:53.679310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.784 qpair failed and we were unable to recover it. 00:23:56.784 [2024-07-25 13:52:53.689211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.784 [2024-07-25 13:52:53.689308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.784 [2024-07-25 13:52:53.689332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.784 [2024-07-25 13:52:53.689347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.784 [2024-07-25 13:52:53.689360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.784 [2024-07-25 13:52:53.689389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.784 qpair failed and we were unable to recover it. 
00:23:56.784 [2024-07-25 13:52:53.699177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.784 [2024-07-25 13:52:53.699264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.784 [2024-07-25 13:52:53.699289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.784 [2024-07-25 13:52:53.699303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.784 [2024-07-25 13:52:53.699316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.784 [2024-07-25 13:52:53.699345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.784 qpair failed and we were unable to recover it. 00:23:56.784 [2024-07-25 13:52:53.709196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.784 [2024-07-25 13:52:53.709279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.784 [2024-07-25 13:52:53.709308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.784 [2024-07-25 13:52:53.709324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.784 [2024-07-25 13:52:53.709337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.784 [2024-07-25 13:52:53.709366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.784 qpair failed and we were unable to recover it. 00:23:56.784 [2024-07-25 13:52:53.719228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:56.784 [2024-07-25 13:52:53.719319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:56.784 [2024-07-25 13:52:53.719344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:56.784 [2024-07-25 13:52:53.719359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:56.784 [2024-07-25 13:52:53.719372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:56.784 [2024-07-25 13:52:53.719401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:56.784 qpair failed and we were unable to recover it. 
00:23:56.784 [2024-07-25 13:52:53.729268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:56.784 [2024-07-25 13:52:53.729361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:56.784 [2024-07-25 13:52:53.729385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:56.784 [2024-07-25 13:52:53.729400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:56.784 [2024-07-25 13:52:53.729413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:56.784 [2024-07-25 13:52:53.729442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:56.784 qpair failed and we were unable to recover it.
00:23:56.784 [2024-07-25 13:52:53.739315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:56.784 [2024-07-25 13:52:53.739398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:56.784 [2024-07-25 13:52:53.739425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:56.784 [2024-07-25 13:52:53.739441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:56.784 [2024-07-25 13:52:53.739454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:56.784 [2024-07-25 13:52:53.739484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:56.784 qpair failed and we were unable to recover it.
00:23:56.784 [2024-07-25 13:52:53.749300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:56.784 [2024-07-25 13:52:53.749400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:56.784 [2024-07-25 13:52:53.749425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:56.784 [2024-07-25 13:52:53.749440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:56.784 [2024-07-25 13:52:53.749453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:56.784 [2024-07-25 13:52:53.749488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:56.784 qpair failed and we were unable to recover it.
00:23:56.784 [2024-07-25 13:52:53.759338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:56.785 [2024-07-25 13:52:53.759446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:56.785 [2024-07-25 13:52:53.759473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:56.785 [2024-07-25 13:52:53.759488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:56.785 [2024-07-25 13:52:53.759501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:56.785 [2024-07-25 13:52:53.759530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:56.785 qpair failed and we were unable to recover it.
00:23:56.785 [2024-07-25 13:52:53.769376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:56.785 [2024-07-25 13:52:53.769469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:56.785 [2024-07-25 13:52:53.769493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:56.785 [2024-07-25 13:52:53.769508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:56.785 [2024-07-25 13:52:53.769520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:56.785 [2024-07-25 13:52:53.769550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:56.785 qpair failed and we were unable to recover it.
00:23:56.785 [2024-07-25 13:52:53.779378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:56.785 [2024-07-25 13:52:53.779465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:56.785 [2024-07-25 13:52:53.779489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:56.785 [2024-07-25 13:52:53.779503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:56.785 [2024-07-25 13:52:53.779516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:56.785 [2024-07-25 13:52:53.779546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:56.785 qpair failed and we were unable to recover it.
00:23:56.785 [2024-07-25 13:52:53.789441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:56.785 [2024-07-25 13:52:53.789522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:56.785 [2024-07-25 13:52:53.789546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:56.785 [2024-07-25 13:52:53.789561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:56.785 [2024-07-25 13:52:53.789574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:56.785 [2024-07-25 13:52:53.789603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:56.785 qpair failed and we were unable to recover it.
00:23:56.785 [2024-07-25 13:52:53.799560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:56.785 [2024-07-25 13:52:53.799653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:56.785 [2024-07-25 13:52:53.799687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:56.785 [2024-07-25 13:52:53.799705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:56.785 [2024-07-25 13:52:53.799719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:56.785 [2024-07-25 13:52:53.799750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:56.785 qpair failed and we were unable to recover it.
00:23:56.785 [2024-07-25 13:52:53.809599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:56.785 [2024-07-25 13:52:53.809731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:56.785 [2024-07-25 13:52:53.809759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:56.785 [2024-07-25 13:52:53.809775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:56.785 [2024-07-25 13:52:53.809787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:56.785 [2024-07-25 13:52:53.809817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:56.785 qpair failed and we were unable to recover it.
00:23:57.044 [2024-07-25 13:52:53.819604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.044 [2024-07-25 13:52:53.819697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.044 [2024-07-25 13:52:53.819722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.044 [2024-07-25 13:52:53.819737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.044 [2024-07-25 13:52:53.819749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.044 [2024-07-25 13:52:53.819779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.044 qpair failed and we were unable to recover it.
00:23:57.044 [2024-07-25 13:52:53.829588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.045 [2024-07-25 13:52:53.829703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.045 [2024-07-25 13:52:53.829730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.045 [2024-07-25 13:52:53.829746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.045 [2024-07-25 13:52:53.829758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.045 [2024-07-25 13:52:53.829788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.045 qpair failed and we were unable to recover it.
00:23:57.045 [2024-07-25 13:52:53.839607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.045 [2024-07-25 13:52:53.839702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.045 [2024-07-25 13:52:53.839727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.045 [2024-07-25 13:52:53.839741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.045 [2024-07-25 13:52:53.839754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.045 [2024-07-25 13:52:53.839788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.045 qpair failed and we were unable to recover it.
00:23:57.045 [2024-07-25 13:52:53.849591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.045 [2024-07-25 13:52:53.849687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.045 [2024-07-25 13:52:53.849713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.045 [2024-07-25 13:52:53.849728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.045 [2024-07-25 13:52:53.849740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.045 [2024-07-25 13:52:53.849770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.045 qpair failed and we were unable to recover it.
00:23:57.045 [2024-07-25 13:52:53.859642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.045 [2024-07-25 13:52:53.859728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.045 [2024-07-25 13:52:53.859753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.045 [2024-07-25 13:52:53.859768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.045 [2024-07-25 13:52:53.859781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.045 [2024-07-25 13:52:53.859817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.045 qpair failed and we were unable to recover it.
00:23:57.045 [2024-07-25 13:52:53.869661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.045 [2024-07-25 13:52:53.869752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.045 [2024-07-25 13:52:53.869776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.045 [2024-07-25 13:52:53.869791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.045 [2024-07-25 13:52:53.869804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.045 [2024-07-25 13:52:53.869834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.045 qpair failed and we were unable to recover it.
00:23:57.045 [2024-07-25 13:52:53.879679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.045 [2024-07-25 13:52:53.879780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.045 [2024-07-25 13:52:53.879807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.045 [2024-07-25 13:52:53.879823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.045 [2024-07-25 13:52:53.879836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.045 [2024-07-25 13:52:53.879865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.045 qpair failed and we were unable to recover it.
00:23:57.045 [2024-07-25 13:52:53.889742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.045 [2024-07-25 13:52:53.889861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.045 [2024-07-25 13:52:53.889891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.045 [2024-07-25 13:52:53.889908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.045 [2024-07-25 13:52:53.889921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.045 [2024-07-25 13:52:53.889952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.045 qpair failed and we were unable to recover it.
00:23:57.045 [2024-07-25 13:52:53.899747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.045 [2024-07-25 13:52:53.899836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.045 [2024-07-25 13:52:53.899861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.045 [2024-07-25 13:52:53.899875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.045 [2024-07-25 13:52:53.899888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.045 [2024-07-25 13:52:53.899917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.045 qpair failed and we were unable to recover it.
00:23:57.045 [2024-07-25 13:52:53.909756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.045 [2024-07-25 13:52:53.909851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.045 [2024-07-25 13:52:53.909876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.045 [2024-07-25 13:52:53.909891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.045 [2024-07-25 13:52:53.909904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.045 [2024-07-25 13:52:53.909934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.045 qpair failed and we were unable to recover it.
00:23:57.045 [2024-07-25 13:52:53.919815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.045 [2024-07-25 13:52:53.919915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.045 [2024-07-25 13:52:53.919939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.045 [2024-07-25 13:52:53.919954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.045 [2024-07-25 13:52:53.919967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.045 [2024-07-25 13:52:53.919995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.045 qpair failed and we were unable to recover it.
00:23:57.045 [2024-07-25 13:52:53.929868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.045 [2024-07-25 13:52:53.929968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.045 [2024-07-25 13:52:53.929995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.045 [2024-07-25 13:52:53.930013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.045 [2024-07-25 13:52:53.930032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.045 [2024-07-25 13:52:53.930078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.045 qpair failed and we were unable to recover it.
00:23:57.045 [2024-07-25 13:52:53.939843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.045 [2024-07-25 13:52:53.939933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.045 [2024-07-25 13:52:53.939959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.045 [2024-07-25 13:52:53.939973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.045 [2024-07-25 13:52:53.939986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.045 [2024-07-25 13:52:53.940015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.045 qpair failed and we were unable to recover it.
00:23:57.046 [2024-07-25 13:52:53.949957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.046 [2024-07-25 13:52:53.950072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.046 [2024-07-25 13:52:53.950098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.046 [2024-07-25 13:52:53.950113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.046 [2024-07-25 13:52:53.950126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.046 [2024-07-25 13:52:53.950156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.046 qpair failed and we were unable to recover it.
00:23:57.046 [2024-07-25 13:52:53.959913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.046 [2024-07-25 13:52:53.960007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.046 [2024-07-25 13:52:53.960033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.046 [2024-07-25 13:52:53.960065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.046 [2024-07-25 13:52:53.960080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.046 [2024-07-25 13:52:53.960111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.046 qpair failed and we were unable to recover it.
00:23:57.046 [2024-07-25 13:52:53.969970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.046 [2024-07-25 13:52:53.970099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.046 [2024-07-25 13:52:53.970126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.046 [2024-07-25 13:52:53.970142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.046 [2024-07-25 13:52:53.970155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.046 [2024-07-25 13:52:53.970184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.046 qpair failed and we were unable to recover it.
00:23:57.046 [2024-07-25 13:52:53.979938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.046 [2024-07-25 13:52:53.980031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.046 [2024-07-25 13:52:53.980074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.046 [2024-07-25 13:52:53.980090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.046 [2024-07-25 13:52:53.980103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.046 [2024-07-25 13:52:53.980132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.046 qpair failed and we were unable to recover it.
00:23:57.046 [2024-07-25 13:52:53.990055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.046 [2024-07-25 13:52:53.990152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.046 [2024-07-25 13:52:53.990178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.046 [2024-07-25 13:52:53.990193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.046 [2024-07-25 13:52:53.990206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.046 [2024-07-25 13:52:53.990235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.046 qpair failed and we were unable to recover it.
00:23:57.046 [2024-07-25 13:52:54.000029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.046 [2024-07-25 13:52:54.000144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.046 [2024-07-25 13:52:54.000168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.046 [2024-07-25 13:52:54.000183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.046 [2024-07-25 13:52:54.000195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.046 [2024-07-25 13:52:54.000224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.046 qpair failed and we were unable to recover it.
00:23:57.046 [2024-07-25 13:52:54.010068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.046 [2024-07-25 13:52:54.010175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.046 [2024-07-25 13:52:54.010201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.046 [2024-07-25 13:52:54.010216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.046 [2024-07-25 13:52:54.010228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.046 [2024-07-25 13:52:54.010258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.046 qpair failed and we were unable to recover it.
00:23:57.046 [2024-07-25 13:52:54.020116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.046 [2024-07-25 13:52:54.020205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.046 [2024-07-25 13:52:54.020233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.046 [2024-07-25 13:52:54.020261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.046 [2024-07-25 13:52:54.020275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.046 [2024-07-25 13:52:54.020306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.046 qpair failed and we were unable to recover it.
00:23:57.046 [2024-07-25 13:52:54.030171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.046 [2024-07-25 13:52:54.030264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.046 [2024-07-25 13:52:54.030291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.046 [2024-07-25 13:52:54.030307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.046 [2024-07-25 13:52:54.030320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.046 [2024-07-25 13:52:54.030361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.046 qpair failed and we were unable to recover it.
00:23:57.046 [2024-07-25 13:52:54.040243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.046 [2024-07-25 13:52:54.040389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.046 [2024-07-25 13:52:54.040413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.046 [2024-07-25 13:52:54.040428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.046 [2024-07-25 13:52:54.040442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.046 [2024-07-25 13:52:54.040471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.046 qpair failed and we were unable to recover it.
00:23:57.046 [2024-07-25 13:52:54.050320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.046 [2024-07-25 13:52:54.050424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.046 [2024-07-25 13:52:54.050449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.046 [2024-07-25 13:52:54.050464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.046 [2024-07-25 13:52:54.050477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.046 [2024-07-25 13:52:54.050506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.046 qpair failed and we were unable to recover it.
00:23:57.046 [2024-07-25 13:52:54.060254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.046 [2024-07-25 13:52:54.060344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.046 [2024-07-25 13:52:54.060368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.046 [2024-07-25 13:52:54.060384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.046 [2024-07-25 13:52:54.060397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.046 [2024-07-25 13:52:54.060426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.046 qpair failed and we were unable to recover it.
00:23:57.047 [2024-07-25 13:52:54.070237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.047 [2024-07-25 13:52:54.070337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.047 [2024-07-25 13:52:54.070363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.047 [2024-07-25 13:52:54.070379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.047 [2024-07-25 13:52:54.070392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.047 [2024-07-25 13:52:54.070421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.047 qpair failed and we were unable to recover it.
00:23:57.307 [2024-07-25 13:52:54.080242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.307 [2024-07-25 13:52:54.080336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.307 [2024-07-25 13:52:54.080363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.307 [2024-07-25 13:52:54.080377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.307 [2024-07-25 13:52:54.080390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.307 [2024-07-25 13:52:54.080420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.307 qpair failed and we were unable to recover it.
00:23:57.307 [2024-07-25 13:52:54.090280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.307 [2024-07-25 13:52:54.090387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.307 [2024-07-25 13:52:54.090417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.307 [2024-07-25 13:52:54.090433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.307 [2024-07-25 13:52:54.090446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.307 [2024-07-25 13:52:54.090475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.307 qpair failed and we were unable to recover it.
00:23:57.307 [2024-07-25 13:52:54.100300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.307 [2024-07-25 13:52:54.100394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.307 [2024-07-25 13:52:54.100420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.307 [2024-07-25 13:52:54.100436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.307 [2024-07-25 13:52:54.100449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.307 [2024-07-25 13:52:54.100478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.308 qpair failed and we were unable to recover it.
00:23:57.308 [2024-07-25 13:52:54.110306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.308 [2024-07-25 13:52:54.110398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.308 [2024-07-25 13:52:54.110424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.308 [2024-07-25 13:52:54.110445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.308 [2024-07-25 13:52:54.110459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.308 [2024-07-25 13:52:54.110488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.308 qpair failed and we were unable to recover it.
00:23:57.308 [2024-07-25 13:52:54.120447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.308 [2024-07-25 13:52:54.120546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.308 [2024-07-25 13:52:54.120573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.308 [2024-07-25 13:52:54.120588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.308 [2024-07-25 13:52:54.120601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.308 [2024-07-25 13:52:54.120632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.308 qpair failed and we were unable to recover it.
00:23:57.308 [2024-07-25 13:52:54.130389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.308 [2024-07-25 13:52:54.130484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.308 [2024-07-25 13:52:54.130508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.308 [2024-07-25 13:52:54.130523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.308 [2024-07-25 13:52:54.130536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.308 [2024-07-25 13:52:54.130565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.308 qpair failed and we were unable to recover it.
00:23:57.308 [2024-07-25 13:52:54.140475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.308 [2024-07-25 13:52:54.140575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.308 [2024-07-25 13:52:54.140599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.308 [2024-07-25 13:52:54.140614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.308 [2024-07-25 13:52:54.140627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.308 [2024-07-25 13:52:54.140656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.308 qpair failed and we were unable to recover it.
00:23:57.308 [2024-07-25 13:52:54.150440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.308 [2024-07-25 13:52:54.150558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.308 [2024-07-25 13:52:54.150584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.308 [2024-07-25 13:52:54.150599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.308 [2024-07-25 13:52:54.150612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.308 [2024-07-25 13:52:54.150641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.308 qpair failed and we were unable to recover it.
00:23:57.308 [2024-07-25 13:52:54.160499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.308 [2024-07-25 13:52:54.160588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.308 [2024-07-25 13:52:54.160613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.308 [2024-07-25 13:52:54.160629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.308 [2024-07-25 13:52:54.160641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.308 [2024-07-25 13:52:54.160671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.308 qpair failed and we were unable to recover it.
00:23:57.308 [2024-07-25 13:52:54.170486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.308 [2024-07-25 13:52:54.170572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.308 [2024-07-25 13:52:54.170597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.308 [2024-07-25 13:52:54.170612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.308 [2024-07-25 13:52:54.170625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.308 [2024-07-25 13:52:54.170653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.308 qpair failed and we were unable to recover it.
00:23:57.308 [2024-07-25 13:52:54.180512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.308 [2024-07-25 13:52:54.180608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.308 [2024-07-25 13:52:54.180633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.308 [2024-07-25 13:52:54.180648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.308 [2024-07-25 13:52:54.180660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.308 [2024-07-25 13:52:54.180689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.308 qpair failed and we were unable to recover it.
00:23:57.308 [2024-07-25 13:52:54.190535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.308 [2024-07-25 13:52:54.190632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.308 [2024-07-25 13:52:54.190657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.308 [2024-07-25 13:52:54.190672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.308 [2024-07-25 13:52:54.190684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.308 [2024-07-25 13:52:54.190713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.308 qpair failed and we were unable to recover it.
00:23:57.308 [2024-07-25 13:52:54.200586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.308 [2024-07-25 13:52:54.200675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.308 [2024-07-25 13:52:54.200705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.308 [2024-07-25 13:52:54.200721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.308 [2024-07-25 13:52:54.200734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.308 [2024-07-25 13:52:54.200763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.308 qpair failed and we were unable to recover it.
00:23:57.308 [2024-07-25 13:52:54.210604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.308 [2024-07-25 13:52:54.210701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.308 [2024-07-25 13:52:54.210725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.308 [2024-07-25 13:52:54.210740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.308 [2024-07-25 13:52:54.210752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.308 [2024-07-25 13:52:54.210782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.308 qpair failed and we were unable to recover it.
00:23:57.308 [2024-07-25 13:52:54.220674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.308 [2024-07-25 13:52:54.220757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.308 [2024-07-25 13:52:54.220782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.308 [2024-07-25 13:52:54.220797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.308 [2024-07-25 13:52:54.220809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.309 [2024-07-25 13:52:54.220839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.309 qpair failed and we were unable to recover it.
00:23:57.309 [2024-07-25 13:52:54.230678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.309 [2024-07-25 13:52:54.230765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.309 [2024-07-25 13:52:54.230790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.309 [2024-07-25 13:52:54.230805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.309 [2024-07-25 13:52:54.230817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.309 [2024-07-25 13:52:54.230847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.309 qpair failed and we were unable to recover it.
00:23:57.309 [2024-07-25 13:52:54.240687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.309 [2024-07-25 13:52:54.240786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.309 [2024-07-25 13:52:54.240810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.309 [2024-07-25 13:52:54.240825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.309 [2024-07-25 13:52:54.240837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.309 [2024-07-25 13:52:54.240872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.309 qpair failed and we were unable to recover it.
00:23:57.309 [2024-07-25 13:52:54.250722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.309 [2024-07-25 13:52:54.250814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.309 [2024-07-25 13:52:54.250840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.309 [2024-07-25 13:52:54.250855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.309 [2024-07-25 13:52:54.250867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.309 [2024-07-25 13:52:54.250897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.309 qpair failed and we were unable to recover it.
00:23:57.309 [2024-07-25 13:52:54.260734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.309 [2024-07-25 13:52:54.260827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.309 [2024-07-25 13:52:54.260852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.309 [2024-07-25 13:52:54.260866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.309 [2024-07-25 13:52:54.260879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.309 [2024-07-25 13:52:54.260909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.309 qpair failed and we were unable to recover it.
00:23:57.309 [2024-07-25 13:52:54.270774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.309 [2024-07-25 13:52:54.270866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.309 [2024-07-25 13:52:54.270891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.309 [2024-07-25 13:52:54.270906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.309 [2024-07-25 13:52:54.270919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.309 [2024-07-25 13:52:54.270948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.309 qpair failed and we were unable to recover it.
00:23:57.309 [2024-07-25 13:52:54.280795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.309 [2024-07-25 13:52:54.280931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.309 [2024-07-25 13:52:54.280955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.309 [2024-07-25 13:52:54.280971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.309 [2024-07-25 13:52:54.280984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.309 [2024-07-25 13:52:54.281013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.309 qpair failed and we were unable to recover it.
00:23:57.309 [2024-07-25 13:52:54.290814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:57.309 [2024-07-25 13:52:54.290910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:57.309 [2024-07-25 13:52:54.290940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:57.309 [2024-07-25 13:52:54.290956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:57.309 [2024-07-25 13:52:54.290969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:57.309 [2024-07-25 13:52:54.290998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.309 qpair failed and we were unable to recover it.
00:23:57.309 [2024-07-25 13:52:54.300832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.309 [2024-07-25 13:52:54.300917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.309 [2024-07-25 13:52:54.300942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.309 [2024-07-25 13:52:54.300956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.309 [2024-07-25 13:52:54.300969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.309 [2024-07-25 13:52:54.300999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.309 qpair failed and we were unable to recover it. 00:23:57.309 [2024-07-25 13:52:54.310886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.309 [2024-07-25 13:52:54.310975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.309 [2024-07-25 13:52:54.311000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.309 [2024-07-25 13:52:54.311015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.309 [2024-07-25 13:52:54.311027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.309 [2024-07-25 13:52:54.311057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.309 qpair failed and we were unable to recover it. 00:23:57.309 [2024-07-25 13:52:54.320961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.309 [2024-07-25 13:52:54.321077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.309 [2024-07-25 13:52:54.321103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.309 [2024-07-25 13:52:54.321118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.309 [2024-07-25 13:52:54.321131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.309 [2024-07-25 13:52:54.321161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.309 qpair failed and we were unable to recover it. 
00:23:57.309 [2024-07-25 13:52:54.330958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.309 [2024-07-25 13:52:54.331045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.309 [2024-07-25 13:52:54.331076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.309 [2024-07-25 13:52:54.331093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.309 [2024-07-25 13:52:54.331110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.309 [2024-07-25 13:52:54.331140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.309 qpair failed and we were unable to recover it. 00:23:57.309 [2024-07-25 13:52:54.340984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.309 [2024-07-25 13:52:54.341080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.309 [2024-07-25 13:52:54.341105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.309 [2024-07-25 13:52:54.341119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.309 [2024-07-25 13:52:54.341132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.310 [2024-07-25 13:52:54.341162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.310 qpair failed and we were unable to recover it. 00:23:57.571 [2024-07-25 13:52:54.350979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.571 [2024-07-25 13:52:54.351072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.571 [2024-07-25 13:52:54.351097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.571 [2024-07-25 13:52:54.351112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.571 [2024-07-25 13:52:54.351125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.571 [2024-07-25 13:52:54.351154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.571 qpair failed and we were unable to recover it. 
00:23:57.571 [2024-07-25 13:52:54.361023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.571 [2024-07-25 13:52:54.361120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.571 [2024-07-25 13:52:54.361144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.571 [2024-07-25 13:52:54.361158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.571 [2024-07-25 13:52:54.361171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.571 [2024-07-25 13:52:54.361201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.571 qpair failed and we were unable to recover it. 00:23:57.571 [2024-07-25 13:52:54.371052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.571 [2024-07-25 13:52:54.371152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.571 [2024-07-25 13:52:54.371177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.571 [2024-07-25 13:52:54.371191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.571 [2024-07-25 13:52:54.371204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.571 [2024-07-25 13:52:54.371233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.571 qpair failed and we were unable to recover it. 00:23:57.571 [2024-07-25 13:52:54.381107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.571 [2024-07-25 13:52:54.381225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.571 [2024-07-25 13:52:54.381250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.571 [2024-07-25 13:52:54.381264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.571 [2024-07-25 13:52:54.381277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.571 [2024-07-25 13:52:54.381307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.571 qpair failed and we were unable to recover it. 
00:23:57.571 [2024-07-25 13:52:54.391148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.571 [2024-07-25 13:52:54.391263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.571 [2024-07-25 13:52:54.391287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.571 [2024-07-25 13:52:54.391302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.571 [2024-07-25 13:52:54.391315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.571 [2024-07-25 13:52:54.391344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.571 qpair failed and we were unable to recover it. 00:23:57.571 [2024-07-25 13:52:54.401145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.571 [2024-07-25 13:52:54.401235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.571 [2024-07-25 13:52:54.401259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.571 [2024-07-25 13:52:54.401273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.571 [2024-07-25 13:52:54.401286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.571 [2024-07-25 13:52:54.401315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.571 qpair failed and we were unable to recover it. 00:23:57.571 [2024-07-25 13:52:54.411162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.571 [2024-07-25 13:52:54.411253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.571 [2024-07-25 13:52:54.411277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.571 [2024-07-25 13:52:54.411292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.571 [2024-07-25 13:52:54.411304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.571 [2024-07-25 13:52:54.411333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.571 qpair failed and we were unable to recover it. 
00:23:57.571 [2024-07-25 13:52:54.421184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.571 [2024-07-25 13:52:54.421310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.571 [2024-07-25 13:52:54.421334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.571 [2024-07-25 13:52:54.421355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.571 [2024-07-25 13:52:54.421369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.571 [2024-07-25 13:52:54.421398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.571 qpair failed and we were unable to recover it. 00:23:57.571 [2024-07-25 13:52:54.431233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.571 [2024-07-25 13:52:54.431315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.571 [2024-07-25 13:52:54.431340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.572 [2024-07-25 13:52:54.431355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.572 [2024-07-25 13:52:54.431367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.572 [2024-07-25 13:52:54.431396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.572 qpair failed and we were unable to recover it. 00:23:57.572 [2024-07-25 13:52:54.441386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.572 [2024-07-25 13:52:54.441521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.572 [2024-07-25 13:52:54.441546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.572 [2024-07-25 13:52:54.441561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.572 [2024-07-25 13:52:54.441574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.572 [2024-07-25 13:52:54.441604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.572 qpair failed and we were unable to recover it. 
00:23:57.572 [2024-07-25 13:52:54.451273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.572 [2024-07-25 13:52:54.451389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.572 [2024-07-25 13:52:54.451414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.572 [2024-07-25 13:52:54.451429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.572 [2024-07-25 13:52:54.451442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.572 [2024-07-25 13:52:54.451470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.572 qpair failed and we were unable to recover it. 00:23:57.572 [2024-07-25 13:52:54.461408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.572 [2024-07-25 13:52:54.461535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.572 [2024-07-25 13:52:54.461576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.572 [2024-07-25 13:52:54.461592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.572 [2024-07-25 13:52:54.461605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.572 [2024-07-25 13:52:54.461650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.572 qpair failed and we were unable to recover it. 00:23:57.572 [2024-07-25 13:52:54.471344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.572 [2024-07-25 13:52:54.471430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.572 [2024-07-25 13:52:54.471456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.572 [2024-07-25 13:52:54.471471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.572 [2024-07-25 13:52:54.471484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.572 [2024-07-25 13:52:54.471514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.572 qpair failed and we were unable to recover it. 
00:23:57.572 [2024-07-25 13:52:54.481403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.572 [2024-07-25 13:52:54.481496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.572 [2024-07-25 13:52:54.481520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.572 [2024-07-25 13:52:54.481535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.572 [2024-07-25 13:52:54.481548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.572 [2024-07-25 13:52:54.481577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.572 qpair failed and we were unable to recover it. 00:23:57.572 [2024-07-25 13:52:54.491414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.572 [2024-07-25 13:52:54.491502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.572 [2024-07-25 13:52:54.491527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.572 [2024-07-25 13:52:54.491542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.572 [2024-07-25 13:52:54.491554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.572 [2024-07-25 13:52:54.491583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.572 qpair failed and we were unable to recover it. 00:23:57.572 [2024-07-25 13:52:54.501439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.572 [2024-07-25 13:52:54.501522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.572 [2024-07-25 13:52:54.501548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.572 [2024-07-25 13:52:54.501562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.572 [2024-07-25 13:52:54.501576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.572 [2024-07-25 13:52:54.501605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.572 qpair failed and we were unable to recover it. 
00:23:57.572 [2024-07-25 13:52:54.511468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.572 [2024-07-25 13:52:54.511551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.572 [2024-07-25 13:52:54.511575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.572 [2024-07-25 13:52:54.511595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.572 [2024-07-25 13:52:54.511608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.572 [2024-07-25 13:52:54.511637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.572 qpair failed and we were unable to recover it. 00:23:57.572 [2024-07-25 13:52:54.521579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.572 [2024-07-25 13:52:54.521713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.572 [2024-07-25 13:52:54.521738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.572 [2024-07-25 13:52:54.521753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.572 [2024-07-25 13:52:54.521766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.572 [2024-07-25 13:52:54.521795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.572 qpair failed and we were unable to recover it. 00:23:57.572 [2024-07-25 13:52:54.531595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.572 [2024-07-25 13:52:54.531696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.572 [2024-07-25 13:52:54.531724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.572 [2024-07-25 13:52:54.531739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.572 [2024-07-25 13:52:54.531752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.572 [2024-07-25 13:52:54.531781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.572 qpair failed and we were unable to recover it. 
00:23:57.572 [2024-07-25 13:52:54.541575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.572 [2024-07-25 13:52:54.541689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.572 [2024-07-25 13:52:54.541715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.572 [2024-07-25 13:52:54.541730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.572 [2024-07-25 13:52:54.541742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.572 [2024-07-25 13:52:54.541771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.572 qpair failed and we were unable to recover it. 00:23:57.572 [2024-07-25 13:52:54.551581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.572 [2024-07-25 13:52:54.551710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.573 [2024-07-25 13:52:54.551735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.573 [2024-07-25 13:52:54.551749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.573 [2024-07-25 13:52:54.551762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.573 [2024-07-25 13:52:54.551790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.573 qpair failed and we were unable to recover it. 00:23:57.573 [2024-07-25 13:52:54.561613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.573 [2024-07-25 13:52:54.561718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.573 [2024-07-25 13:52:54.561743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.573 [2024-07-25 13:52:54.561757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.573 [2024-07-25 13:52:54.561770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.573 [2024-07-25 13:52:54.561799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.573 qpair failed and we were unable to recover it. 
00:23:57.573 [2024-07-25 13:52:54.571638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.573 [2024-07-25 13:52:54.571725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.573 [2024-07-25 13:52:54.571750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.573 [2024-07-25 13:52:54.571765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.573 [2024-07-25 13:52:54.571778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.573 [2024-07-25 13:52:54.571808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.573 qpair failed and we were unable to recover it. 00:23:57.573 [2024-07-25 13:52:54.581677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.573 [2024-07-25 13:52:54.581772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.573 [2024-07-25 13:52:54.581796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.573 [2024-07-25 13:52:54.581811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.573 [2024-07-25 13:52:54.581823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.573 [2024-07-25 13:52:54.581853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.573 qpair failed and we were unable to recover it. 00:23:57.573 [2024-07-25 13:52:54.591684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.573 [2024-07-25 13:52:54.591769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.573 [2024-07-25 13:52:54.591794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.573 [2024-07-25 13:52:54.591809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.573 [2024-07-25 13:52:54.591821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.573 [2024-07-25 13:52:54.591850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.573 qpair failed and we were unable to recover it. 
00:23:57.573 [2024-07-25 13:52:54.601760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.573 [2024-07-25 13:52:54.601874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.573 [2024-07-25 13:52:54.601904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.573 [2024-07-25 13:52:54.601920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.573 [2024-07-25 13:52:54.601933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.573 [2024-07-25 13:52:54.601963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.573 qpair failed and we were unable to recover it. 00:23:57.835 [2024-07-25 13:52:54.611723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.835 [2024-07-25 13:52:54.611826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.835 [2024-07-25 13:52:54.611852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.835 [2024-07-25 13:52:54.611868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.835 [2024-07-25 13:52:54.611881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.835 [2024-07-25 13:52:54.611910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.835 qpair failed and we were unable to recover it. 00:23:57.835 [2024-07-25 13:52:54.621820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.835 [2024-07-25 13:52:54.621933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.835 [2024-07-25 13:52:54.621961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.835 [2024-07-25 13:52:54.621978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.835 [2024-07-25 13:52:54.621992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.835 [2024-07-25 13:52:54.622023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.835 qpair failed and we were unable to recover it. 
00:23:57.835 [2024-07-25 13:52:54.631817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.835 [2024-07-25 13:52:54.631903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.835 [2024-07-25 13:52:54.631928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.835 [2024-07-25 13:52:54.631943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.835 [2024-07-25 13:52:54.631956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.835 [2024-07-25 13:52:54.631986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.835 qpair failed and we were unable to recover it. 00:23:57.835 [2024-07-25 13:52:54.641917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.835 [2024-07-25 13:52:54.642008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.835 [2024-07-25 13:52:54.642034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.835 [2024-07-25 13:52:54.642049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.835 [2024-07-25 13:52:54.642084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.835 [2024-07-25 13:52:54.642150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.835 qpair failed and we were unable to recover it. 00:23:57.835 [2024-07-25 13:52:54.651846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.835 [2024-07-25 13:52:54.651944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.835 [2024-07-25 13:52:54.651969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.835 [2024-07-25 13:52:54.651984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.835 [2024-07-25 13:52:54.651997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.835 [2024-07-25 13:52:54.652027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.835 qpair failed and we were unable to recover it. 
00:23:57.835 [2024-07-25 13:52:54.662009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.835 [2024-07-25 13:52:54.662093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.835 [2024-07-25 13:52:54.662118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.835 [2024-07-25 13:52:54.662133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.835 [2024-07-25 13:52:54.662146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.835 [2024-07-25 13:52:54.662175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.835 qpair failed and we were unable to recover it. 00:23:57.835 [2024-07-25 13:52:54.671907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.835 [2024-07-25 13:52:54.671999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.835 [2024-07-25 13:52:54.672024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.835 [2024-07-25 13:52:54.672040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.835 [2024-07-25 13:52:54.672052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.835 [2024-07-25 13:52:54.672103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.835 qpair failed and we were unable to recover it. 00:23:57.835 [2024-07-25 13:52:54.682082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.835 [2024-07-25 13:52:54.682201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.835 [2024-07-25 13:52:54.682225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.835 [2024-07-25 13:52:54.682240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.835 [2024-07-25 13:52:54.682254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.835 [2024-07-25 13:52:54.682283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.835 qpair failed and we were unable to recover it. 
00:23:57.835 [2024-07-25 13:52:54.691993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.835 [2024-07-25 13:52:54.692092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.835 [2024-07-25 13:52:54.692122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.835 [2024-07-25 13:52:54.692138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.835 [2024-07-25 13:52:54.692151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.835 [2024-07-25 13:52:54.692181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.835 qpair failed and we were unable to recover it. 00:23:57.835 [2024-07-25 13:52:54.702056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.835 [2024-07-25 13:52:54.702151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.835 [2024-07-25 13:52:54.702177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.835 [2024-07-25 13:52:54.702191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.835 [2024-07-25 13:52:54.702204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.835 [2024-07-25 13:52:54.702233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.835 qpair failed and we were unable to recover it. 00:23:57.835 [2024-07-25 13:52:54.712021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.835 [2024-07-25 13:52:54.712115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.835 [2024-07-25 13:52:54.712140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.836 [2024-07-25 13:52:54.712155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.836 [2024-07-25 13:52:54.712168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.836 [2024-07-25 13:52:54.712197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.836 qpair failed and we were unable to recover it. 
00:23:57.836 [2024-07-25 13:52:54.722051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.836 [2024-07-25 13:52:54.722153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.836 [2024-07-25 13:52:54.722177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.836 [2024-07-25 13:52:54.722191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.836 [2024-07-25 13:52:54.722204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.836 [2024-07-25 13:52:54.722234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.836 qpair failed and we were unable to recover it. 00:23:57.836 [2024-07-25 13:52:54.732118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.836 [2024-07-25 13:52:54.732218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.836 [2024-07-25 13:52:54.732246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.836 [2024-07-25 13:52:54.732262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.836 [2024-07-25 13:52:54.732280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.836 [2024-07-25 13:52:54.732310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.836 qpair failed and we were unable to recover it. 00:23:57.836 [2024-07-25 13:52:54.742131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.836 [2024-07-25 13:52:54.742255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.836 [2024-07-25 13:52:54.742281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.836 [2024-07-25 13:52:54.742295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.836 [2024-07-25 13:52:54.742308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.836 [2024-07-25 13:52:54.742338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.836 qpair failed and we were unable to recover it. 
00:23:57.836 [2024-07-25 13:52:54.752199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.836 [2024-07-25 13:52:54.752284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.836 [2024-07-25 13:52:54.752310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.836 [2024-07-25 13:52:54.752325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.836 [2024-07-25 13:52:54.752337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.836 [2024-07-25 13:52:54.752382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.836 qpair failed and we were unable to recover it. 00:23:57.836 [2024-07-25 13:52:54.762213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.836 [2024-07-25 13:52:54.762358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.836 [2024-07-25 13:52:54.762387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.836 [2024-07-25 13:52:54.762419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.836 [2024-07-25 13:52:54.762431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.836 [2024-07-25 13:52:54.762475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.836 qpair failed and we were unable to recover it. 00:23:57.836 [2024-07-25 13:52:54.772186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.836 [2024-07-25 13:52:54.772282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.836 [2024-07-25 13:52:54.772307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.836 [2024-07-25 13:52:54.772322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.836 [2024-07-25 13:52:54.772334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.836 [2024-07-25 13:52:54.772364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.836 qpair failed and we were unable to recover it. 
00:23:57.836 [2024-07-25 13:52:54.782263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.836 [2024-07-25 13:52:54.782353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.836 [2024-07-25 13:52:54.782378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.836 [2024-07-25 13:52:54.782393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.836 [2024-07-25 13:52:54.782406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.836 [2024-07-25 13:52:54.782435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.836 qpair failed and we were unable to recover it. 00:23:57.836 [2024-07-25 13:52:54.792231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.836 [2024-07-25 13:52:54.792335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.836 [2024-07-25 13:52:54.792360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.836 [2024-07-25 13:52:54.792375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.836 [2024-07-25 13:52:54.792389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.836 [2024-07-25 13:52:54.792418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.836 qpair failed and we were unable to recover it. 00:23:57.836 [2024-07-25 13:52:54.802299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.836 [2024-07-25 13:52:54.802388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.836 [2024-07-25 13:52:54.802413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.836 [2024-07-25 13:52:54.802428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.836 [2024-07-25 13:52:54.802440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.836 [2024-07-25 13:52:54.802470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.836 qpair failed and we were unable to recover it. 
00:23:57.836 [2024-07-25 13:52:54.812360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.836 [2024-07-25 13:52:54.812447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.836 [2024-07-25 13:52:54.812473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.836 [2024-07-25 13:52:54.812488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.836 [2024-07-25 13:52:54.812500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.836 [2024-07-25 13:52:54.812529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.836 qpair failed and we were unable to recover it. 00:23:57.836 [2024-07-25 13:52:54.822369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.836 [2024-07-25 13:52:54.822458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.836 [2024-07-25 13:52:54.822484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.836 [2024-07-25 13:52:54.822504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.836 [2024-07-25 13:52:54.822522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.836 [2024-07-25 13:52:54.822553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.836 qpair failed and we were unable to recover it. 00:23:57.836 [2024-07-25 13:52:54.832335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.837 [2024-07-25 13:52:54.832414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.837 [2024-07-25 13:52:54.832440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.837 [2024-07-25 13:52:54.832455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.837 [2024-07-25 13:52:54.832468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.837 [2024-07-25 13:52:54.832498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.837 qpair failed and we were unable to recover it. 
00:23:57.837 [2024-07-25 13:52:54.842406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.837 [2024-07-25 13:52:54.842505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.837 [2024-07-25 13:52:54.842530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.837 [2024-07-25 13:52:54.842545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.837 [2024-07-25 13:52:54.842558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.837 [2024-07-25 13:52:54.842587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.837 qpair failed and we were unable to recover it. 00:23:57.837 [2024-07-25 13:52:54.852424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.837 [2024-07-25 13:52:54.852544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.837 [2024-07-25 13:52:54.852568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.837 [2024-07-25 13:52:54.852583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.837 [2024-07-25 13:52:54.852595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.837 [2024-07-25 13:52:54.852625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.837 qpair failed and we were unable to recover it. 00:23:57.837 [2024-07-25 13:52:54.862559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:57.837 [2024-07-25 13:52:54.862644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:57.837 [2024-07-25 13:52:54.862670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:57.837 [2024-07-25 13:52:54.862689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:57.837 [2024-07-25 13:52:54.862702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:57.837 [2024-07-25 13:52:54.862732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.837 qpair failed and we were unable to recover it. 
00:23:58.098 [2024-07-25 13:52:54.872488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.098 [2024-07-25 13:52:54.872574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.098 [2024-07-25 13:52:54.872602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.098 [2024-07-25 13:52:54.872618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.098 [2024-07-25 13:52:54.872631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.098 [2024-07-25 13:52:54.872660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.098 qpair failed and we were unable to recover it. 00:23:58.098 [2024-07-25 13:52:54.882518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.098 [2024-07-25 13:52:54.882609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.098 [2024-07-25 13:52:54.882634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.098 [2024-07-25 13:52:54.882648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.098 [2024-07-25 13:52:54.882661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.098 [2024-07-25 13:52:54.882691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.098 qpair failed and we were unable to recover it. 00:23:58.098 [2024-07-25 13:52:54.892525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.098 [2024-07-25 13:52:54.892611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.098 [2024-07-25 13:52:54.892636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.098 [2024-07-25 13:52:54.892650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.098 [2024-07-25 13:52:54.892663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.098 [2024-07-25 13:52:54.892692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.098 qpair failed and we were unable to recover it. 
00:23:58.098 [2024-07-25 13:52:54.902590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.098 [2024-07-25 13:52:54.902719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.098 [2024-07-25 13:52:54.902744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.098 [2024-07-25 13:52:54.902759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.098 [2024-07-25 13:52:54.902772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.098 [2024-07-25 13:52:54.902801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.098 qpair failed and we were unable to recover it. 00:23:58.098 [2024-07-25 13:52:54.912611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.098 [2024-07-25 13:52:54.912727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.098 [2024-07-25 13:52:54.912752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.098 [2024-07-25 13:52:54.912772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.098 [2024-07-25 13:52:54.912785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.098 [2024-07-25 13:52:54.912816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.098 qpair failed and we were unable to recover it. 00:23:58.098 [2024-07-25 13:52:54.922619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.098 [2024-07-25 13:52:54.922709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.098 [2024-07-25 13:52:54.922734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.098 [2024-07-25 13:52:54.922749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.098 [2024-07-25 13:52:54.922762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.098 [2024-07-25 13:52:54.922791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.098 qpair failed and we were unable to recover it. 
00:23:58.098 [2024-07-25 13:52:54.932641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.098 [2024-07-25 13:52:54.932729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.098 [2024-07-25 13:52:54.932753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.098 [2024-07-25 13:52:54.932768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.098 [2024-07-25 13:52:54.932781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.098 [2024-07-25 13:52:54.932811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.098 qpair failed and we were unable to recover it. 00:23:58.098 [2024-07-25 13:52:54.942699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.098 [2024-07-25 13:52:54.942792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.098 [2024-07-25 13:52:54.942816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.098 [2024-07-25 13:52:54.942832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.098 [2024-07-25 13:52:54.942845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.098 [2024-07-25 13:52:54.942873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.098 qpair failed and we were unable to recover it. 00:23:58.098 [2024-07-25 13:52:54.952701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.099 [2024-07-25 13:52:54.952785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.099 [2024-07-25 13:52:54.952810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.099 [2024-07-25 13:52:54.952825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.099 [2024-07-25 13:52:54.952837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.099 [2024-07-25 13:52:54.952867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.099 qpair failed and we were unable to recover it. 
00:23:58.099 [2024-07-25 13:52:54.962736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.099 [2024-07-25 13:52:54.962830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.099 [2024-07-25 13:52:54.962854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.099 [2024-07-25 13:52:54.962869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.099 [2024-07-25 13:52:54.962882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.099 [2024-07-25 13:52:54.962911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.099 qpair failed and we were unable to recover it. 00:23:58.099 [2024-07-25 13:52:54.972747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.099 [2024-07-25 13:52:54.972845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.099 [2024-07-25 13:52:54.972870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.099 [2024-07-25 13:52:54.972884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.099 [2024-07-25 13:52:54.972896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.099 [2024-07-25 13:52:54.972926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.099 qpair failed and we were unable to recover it. 00:23:58.099 [2024-07-25 13:52:54.982791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.099 [2024-07-25 13:52:54.982876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.099 [2024-07-25 13:52:54.982904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.099 [2024-07-25 13:52:54.982920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.099 [2024-07-25 13:52:54.982933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.099 [2024-07-25 13:52:54.982962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.099 qpair failed and we were unable to recover it. 
00:23:58.099 [2024-07-25 13:52:54.992825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.099 [2024-07-25 13:52:54.992912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.099 [2024-07-25 13:52:54.992937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.099 [2024-07-25 13:52:54.992952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.099 [2024-07-25 13:52:54.992965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.099 [2024-07-25 13:52:54.992994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.099 qpair failed and we were unable to recover it. 00:23:58.099 [2024-07-25 13:52:55.002866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.099 [2024-07-25 13:52:55.002958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.099 [2024-07-25 13:52:55.002989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.099 [2024-07-25 13:52:55.003005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.099 [2024-07-25 13:52:55.003018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.099 [2024-07-25 13:52:55.003048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.099 qpair failed and we were unable to recover it. 00:23:58.099 [2024-07-25 13:52:55.012879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.099 [2024-07-25 13:52:55.012961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.099 [2024-07-25 13:52:55.012986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.099 [2024-07-25 13:52:55.013000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.099 [2024-07-25 13:52:55.013014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.099 [2024-07-25 13:52:55.013044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.099 qpair failed and we were unable to recover it. 
00:23:58.099 [2024-07-25 13:52:55.022936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.099 [2024-07-25 13:52:55.023030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.099 [2024-07-25 13:52:55.023054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.099 [2024-07-25 13:52:55.023077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.099 [2024-07-25 13:52:55.023091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.099 [2024-07-25 13:52:55.023121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.099 qpair failed and we were unable to recover it. 00:23:58.099 [2024-07-25 13:52:55.032913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.099 [2024-07-25 13:52:55.032995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.099 [2024-07-25 13:52:55.033019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.099 [2024-07-25 13:52:55.033034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.099 [2024-07-25 13:52:55.033047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.099 [2024-07-25 13:52:55.033086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.099 qpair failed and we were unable to recover it. 00:23:58.099 [2024-07-25 13:52:55.042983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.099 [2024-07-25 13:52:55.043084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.099 [2024-07-25 13:52:55.043112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.099 [2024-07-25 13:52:55.043129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.099 [2024-07-25 13:52:55.043142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.099 [2024-07-25 13:52:55.043178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.099 qpair failed and we were unable to recover it. 
00:23:58.099 [2024-07-25 13:52:55.053015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.099 [2024-07-25 13:52:55.053117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.099 [2024-07-25 13:52:55.053143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.099 [2024-07-25 13:52:55.053157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.099 [2024-07-25 13:52:55.053170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.099 [2024-07-25 13:52:55.053200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.099 qpair failed and we were unable to recover it. 00:23:58.099 [2024-07-25 13:52:55.063026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.099 [2024-07-25 13:52:55.063129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.099 [2024-07-25 13:52:55.063154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.099 [2024-07-25 13:52:55.063169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.099 [2024-07-25 13:52:55.063181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.099 [2024-07-25 13:52:55.063211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.099 qpair failed and we were unable to recover it. 00:23:58.099 [2024-07-25 13:52:55.073115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.100 [2024-07-25 13:52:55.073216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.100 [2024-07-25 13:52:55.073244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.100 [2024-07-25 13:52:55.073259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.100 [2024-07-25 13:52:55.073271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.100 [2024-07-25 13:52:55.073312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.100 qpair failed and we were unable to recover it. 
00:23:58.100 [2024-07-25 13:52:55.083109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.100 [2024-07-25 13:52:55.083203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.100 [2024-07-25 13:52:55.083228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.100 [2024-07-25 13:52:55.083243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.100 [2024-07-25 13:52:55.083256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.100 [2024-07-25 13:52:55.083286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.100 qpair failed and we were unable to recover it. 00:23:58.100 [2024-07-25 13:52:55.093110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.100 [2024-07-25 13:52:55.093195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.100 [2024-07-25 13:52:55.093227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.100 [2024-07-25 13:52:55.093243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.100 [2024-07-25 13:52:55.093256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.100 [2024-07-25 13:52:55.093286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.100 qpair failed and we were unable to recover it. 00:23:58.100 [2024-07-25 13:52:55.103115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.100 [2024-07-25 13:52:55.103197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.100 [2024-07-25 13:52:55.103223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.100 [2024-07-25 13:52:55.103238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.100 [2024-07-25 13:52:55.103250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.100 [2024-07-25 13:52:55.103280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.100 qpair failed and we were unable to recover it. 
00:23:58.100 [2024-07-25 13:52:55.113148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.100 [2024-07-25 13:52:55.113262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.100 [2024-07-25 13:52:55.113289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.100 [2024-07-25 13:52:55.113304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.100 [2024-07-25 13:52:55.113320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.100 [2024-07-25 13:52:55.113349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.100 qpair failed and we were unable to recover it. 00:23:58.100 [2024-07-25 13:52:55.123337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.100 [2024-07-25 13:52:55.123427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.100 [2024-07-25 13:52:55.123452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.100 [2024-07-25 13:52:55.123467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.100 [2024-07-25 13:52:55.123480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.100 [2024-07-25 13:52:55.123518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.100 qpair failed and we were unable to recover it. 00:23:58.360 [2024-07-25 13:52:55.133220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.360 [2024-07-25 13:52:55.133313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.360 [2024-07-25 13:52:55.133338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.360 [2024-07-25 13:52:55.133353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.360 [2024-07-25 13:52:55.133370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.360 [2024-07-25 13:52:55.133400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.360 qpair failed and we were unable to recover it. 
00:23:58.360 [2024-07-25 13:52:55.143237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.360 [2024-07-25 13:52:55.143316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.360 [2024-07-25 13:52:55.143344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.360 [2024-07-25 13:52:55.143360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.360 [2024-07-25 13:52:55.143372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.360 [2024-07-25 13:52:55.143401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.360 qpair failed and we were unable to recover it. 00:23:58.360 [2024-07-25 13:52:55.153270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.360 [2024-07-25 13:52:55.153377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.360 [2024-07-25 13:52:55.153404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.360 [2024-07-25 13:52:55.153419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.360 [2024-07-25 13:52:55.153431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.360 [2024-07-25 13:52:55.153460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.360 qpair failed and we were unable to recover it. 00:23:58.360 [2024-07-25 13:52:55.163335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.360 [2024-07-25 13:52:55.163429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.360 [2024-07-25 13:52:55.163457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.360 [2024-07-25 13:52:55.163474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.360 [2024-07-25 13:52:55.163486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.360 [2024-07-25 13:52:55.163518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.360 qpair failed and we were unable to recover it. 
00:23:58.360 [2024-07-25 13:52:55.173333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.360 [2024-07-25 13:52:55.173463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.360 [2024-07-25 13:52:55.173490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.360 [2024-07-25 13:52:55.173505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.360 [2024-07-25 13:52:55.173518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.360 [2024-07-25 13:52:55.173548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.360 qpair failed and we were unable to recover it. 00:23:58.360 [2024-07-25 13:52:55.183378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.360 [2024-07-25 13:52:55.183469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.360 [2024-07-25 13:52:55.183495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.360 [2024-07-25 13:52:55.183509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.360 [2024-07-25 13:52:55.183522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.360 [2024-07-25 13:52:55.183551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.360 qpair failed and we were unable to recover it. 00:23:58.360 [2024-07-25 13:52:55.193466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.360 [2024-07-25 13:52:55.193570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.360 [2024-07-25 13:52:55.193597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.360 [2024-07-25 13:52:55.193613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.360 [2024-07-25 13:52:55.193626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.360 [2024-07-25 13:52:55.193656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.360 qpair failed and we were unable to recover it. 
00:23:58.360 [2024-07-25 13:52:55.203393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.360 [2024-07-25 13:52:55.203482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.360 [2024-07-25 13:52:55.203507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.360 [2024-07-25 13:52:55.203521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.360 [2024-07-25 13:52:55.203534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.360 [2024-07-25 13:52:55.203564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.360 qpair failed and we were unable to recover it. 00:23:58.360 [2024-07-25 13:52:55.213480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.360 [2024-07-25 13:52:55.213584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.360 [2024-07-25 13:52:55.213609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.360 [2024-07-25 13:52:55.213623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.360 [2024-07-25 13:52:55.213636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.360 [2024-07-25 13:52:55.213665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.360 qpair failed and we were unable to recover it. 00:23:58.360 [2024-07-25 13:52:55.223476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.360 [2024-07-25 13:52:55.223563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.360 [2024-07-25 13:52:55.223588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.360 [2024-07-25 13:52:55.223603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.360 [2024-07-25 13:52:55.223621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.360 [2024-07-25 13:52:55.223651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.360 qpair failed and we were unable to recover it. 
00:23:58.360 [2024-07-25 13:52:55.233476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.360 [2024-07-25 13:52:55.233566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.360 [2024-07-25 13:52:55.233590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.360 [2024-07-25 13:52:55.233604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.360 [2024-07-25 13:52:55.233617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.360 [2024-07-25 13:52:55.233646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.360 qpair failed and we were unable to recover it. 00:23:58.360 [2024-07-25 13:52:55.243549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.360 [2024-07-25 13:52:55.243643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.360 [2024-07-25 13:52:55.243668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.360 [2024-07-25 13:52:55.243683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.361 [2024-07-25 13:52:55.243696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.361 [2024-07-25 13:52:55.243726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.361 qpair failed and we were unable to recover it. 00:23:58.361 [2024-07-25 13:52:55.253577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.361 [2024-07-25 13:52:55.253692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.361 [2024-07-25 13:52:55.253719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.361 [2024-07-25 13:52:55.253735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.361 [2024-07-25 13:52:55.253747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.361 [2024-07-25 13:52:55.253776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.361 qpair failed and we were unable to recover it. 
00:23:58.361 [2024-07-25 13:52:55.263562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.361 [2024-07-25 13:52:55.263649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.361 [2024-07-25 13:52:55.263674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.361 [2024-07-25 13:52:55.263688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.361 [2024-07-25 13:52:55.263701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.361 [2024-07-25 13:52:55.263730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.361 qpair failed and we were unable to recover it. 00:23:58.361 [2024-07-25 13:52:55.273611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.361 [2024-07-25 13:52:55.273727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.361 [2024-07-25 13:52:55.273754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.361 [2024-07-25 13:52:55.273769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.361 [2024-07-25 13:52:55.273782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.361 [2024-07-25 13:52:55.273811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.361 qpair failed and we were unable to recover it. 00:23:58.361 [2024-07-25 13:52:55.283640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.361 [2024-07-25 13:52:55.283730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.361 [2024-07-25 13:52:55.283755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.361 [2024-07-25 13:52:55.283770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.361 [2024-07-25 13:52:55.283783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.361 [2024-07-25 13:52:55.283812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.361 qpair failed and we were unable to recover it. 
00:23:58.361 [2024-07-25 13:52:55.293670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.361 [2024-07-25 13:52:55.293763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.361 [2024-07-25 13:52:55.293787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.361 [2024-07-25 13:52:55.293802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.361 [2024-07-25 13:52:55.293815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.361 [2024-07-25 13:52:55.293844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.361 qpair failed and we were unable to recover it. 00:23:58.361 [2024-07-25 13:52:55.303683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.361 [2024-07-25 13:52:55.303804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.361 [2024-07-25 13:52:55.303831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.361 [2024-07-25 13:52:55.303846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.361 [2024-07-25 13:52:55.303859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.361 [2024-07-25 13:52:55.303888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.361 qpair failed and we were unable to recover it. 00:23:58.361 [2024-07-25 13:52:55.313701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.361 [2024-07-25 13:52:55.313785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.361 [2024-07-25 13:52:55.313810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.361 [2024-07-25 13:52:55.313830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.361 [2024-07-25 13:52:55.313844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.361 [2024-07-25 13:52:55.313872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.361 qpair failed and we were unable to recover it. 
00:23:58.361 [2024-07-25 13:52:55.323764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.361 [2024-07-25 13:52:55.323887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.361 [2024-07-25 13:52:55.323913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.361 [2024-07-25 13:52:55.323928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.361 [2024-07-25 13:52:55.323941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.361 [2024-07-25 13:52:55.323971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.361 qpair failed and we were unable to recover it. 00:23:58.361 [2024-07-25 13:52:55.333809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.361 [2024-07-25 13:52:55.333901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.361 [2024-07-25 13:52:55.333925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.361 [2024-07-25 13:52:55.333939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.361 [2024-07-25 13:52:55.333952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.361 [2024-07-25 13:52:55.333982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.361 qpair failed and we were unable to recover it. 00:23:58.361 [2024-07-25 13:52:55.343833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.361 [2024-07-25 13:52:55.343948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.361 [2024-07-25 13:52:55.343974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.361 [2024-07-25 13:52:55.343989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.361 [2024-07-25 13:52:55.344001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.361 [2024-07-25 13:52:55.344031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.361 qpair failed and we were unable to recover it. 
00:23:58.361 [2024-07-25 13:52:55.353938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.361 [2024-07-25 13:52:55.354078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.361 [2024-07-25 13:52:55.354105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.361 [2024-07-25 13:52:55.354120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.361 [2024-07-25 13:52:55.354133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.361 [2024-07-25 13:52:55.354163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.361 qpair failed and we were unable to recover it. 00:23:58.361 [2024-07-25 13:52:55.363886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.361 [2024-07-25 13:52:55.363979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.361 [2024-07-25 13:52:55.364005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.361 [2024-07-25 13:52:55.364021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.361 [2024-07-25 13:52:55.364034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.362 [2024-07-25 13:52:55.364071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.362 qpair failed and we were unable to recover it. 00:23:58.362 [2024-07-25 13:52:55.373969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.362 [2024-07-25 13:52:55.374067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.362 [2024-07-25 13:52:55.374092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.362 [2024-07-25 13:52:55.374107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.362 [2024-07-25 13:52:55.374119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.362 [2024-07-25 13:52:55.374149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.362 qpair failed and we were unable to recover it. 
00:23:58.362 [2024-07-25 13:52:55.383959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.362 [2024-07-25 13:52:55.384054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.362 [2024-07-25 13:52:55.384086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.362 [2024-07-25 13:52:55.384102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.362 [2024-07-25 13:52:55.384114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.362 [2024-07-25 13:52:55.384144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.362 qpair failed and we were unable to recover it. 00:23:58.362 [2024-07-25 13:52:55.393963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.629 [2024-07-25 13:52:55.394052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.629 [2024-07-25 13:52:55.394085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.629 [2024-07-25 13:52:55.394101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.629 [2024-07-25 13:52:55.394113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.629 [2024-07-25 13:52:55.394144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.629 qpair failed and we were unable to recover it. 00:23:58.629 [2024-07-25 13:52:55.403977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.629 [2024-07-25 13:52:55.404074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.629 [2024-07-25 13:52:55.404107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.629 [2024-07-25 13:52:55.404123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.629 [2024-07-25 13:52:55.404136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.629 [2024-07-25 13:52:55.404166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.629 qpair failed and we were unable to recover it. 
00:23:58.629 [2024-07-25 13:52:55.414024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.629 [2024-07-25 13:52:55.414115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.629 [2024-07-25 13:52:55.414140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.629 [2024-07-25 13:52:55.414155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.629 [2024-07-25 13:52:55.414167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.629 [2024-07-25 13:52:55.414196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.629 qpair failed and we were unable to recover it. 00:23:58.629 [2024-07-25 13:52:55.424027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.629 [2024-07-25 13:52:55.424121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.629 [2024-07-25 13:52:55.424146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.629 [2024-07-25 13:52:55.424161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.629 [2024-07-25 13:52:55.424174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.629 [2024-07-25 13:52:55.424203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.629 qpair failed and we were unable to recover it. 00:23:58.629 [2024-07-25 13:52:55.434045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:58.629 [2024-07-25 13:52:55.434141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:58.629 [2024-07-25 13:52:55.434165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:58.629 [2024-07-25 13:52:55.434180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:58.629 [2024-07-25 13:52:55.434193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:58.629 [2024-07-25 13:52:55.434222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.629 qpair failed and we were unable to recover it. 
00:23:58.629 [2024-07-25 13:52:55.444144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.629 [2024-07-25 13:52:55.444237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.629 [2024-07-25 13:52:55.444263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.629 [2024-07-25 13:52:55.444282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.629 [2024-07-25 13:52:55.444295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.629 [2024-07-25 13:52:55.444343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.629 qpair failed and we were unable to recover it.
00:23:58.629 [2024-07-25 13:52:55.454142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.629 [2024-07-25 13:52:55.454276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.629 [2024-07-25 13:52:55.454302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.629 [2024-07-25 13:52:55.454317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.629 [2024-07-25 13:52:55.454330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.629 [2024-07-25 13:52:55.454359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.629 qpair failed and we were unable to recover it.
00:23:58.629 [2024-07-25 13:52:55.464249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.629 [2024-07-25 13:52:55.464332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.629 [2024-07-25 13:52:55.464358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.629 [2024-07-25 13:52:55.464373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.629 [2024-07-25 13:52:55.464386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.629 [2024-07-25 13:52:55.464415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.629 qpair failed and we were unable to recover it.
00:23:58.629 [2024-07-25 13:52:55.474170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.629 [2024-07-25 13:52:55.474266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.629 [2024-07-25 13:52:55.474293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.629 [2024-07-25 13:52:55.474308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.629 [2024-07-25 13:52:55.474320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.629 [2024-07-25 13:52:55.474350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.629 qpair failed and we were unable to recover it.
00:23:58.629 [2024-07-25 13:52:55.484194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.629 [2024-07-25 13:52:55.484285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.629 [2024-07-25 13:52:55.484311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.629 [2024-07-25 13:52:55.484325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.629 [2024-07-25 13:52:55.484338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.629 [2024-07-25 13:52:55.484368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.629 qpair failed and we were unable to recover it.
00:23:58.629 [2024-07-25 13:52:55.494223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.629 [2024-07-25 13:52:55.494310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.629 [2024-07-25 13:52:55.494340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.629 [2024-07-25 13:52:55.494355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.629 [2024-07-25 13:52:55.494368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.629 [2024-07-25 13:52:55.494398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.629 qpair failed and we were unable to recover it.
00:23:58.629 [2024-07-25 13:52:55.504263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.629 [2024-07-25 13:52:55.504354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.629 [2024-07-25 13:52:55.504379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.629 [2024-07-25 13:52:55.504394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.629 [2024-07-25 13:52:55.504406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.629 [2024-07-25 13:52:55.504436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.629 qpair failed and we were unable to recover it.
00:23:58.629 [2024-07-25 13:52:55.514286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.629 [2024-07-25 13:52:55.514370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.629 [2024-07-25 13:52:55.514395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.629 [2024-07-25 13:52:55.514409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.629 [2024-07-25 13:52:55.514421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.629 [2024-07-25 13:52:55.514450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.629 qpair failed and we were unable to recover it.
00:23:58.629 [2024-07-25 13:52:55.524368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.629 [2024-07-25 13:52:55.524461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.629 [2024-07-25 13:52:55.524486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.629 [2024-07-25 13:52:55.524500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.629 [2024-07-25 13:52:55.524513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.629 [2024-07-25 13:52:55.524542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.629 qpair failed and we were unable to recover it.
00:23:58.629 [2024-07-25 13:52:55.534376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.629 [2024-07-25 13:52:55.534477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.629 [2024-07-25 13:52:55.534502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.629 [2024-07-25 13:52:55.534517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.629 [2024-07-25 13:52:55.534530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.629 [2024-07-25 13:52:55.534564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.629 qpair failed and we were unable to recover it.
00:23:58.629 [2024-07-25 13:52:55.544366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.629 [2024-07-25 13:52:55.544459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.629 [2024-07-25 13:52:55.544483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.629 [2024-07-25 13:52:55.544498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.629 [2024-07-25 13:52:55.544511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.629 [2024-07-25 13:52:55.544541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.629 qpair failed and we were unable to recover it.
00:23:58.629 [2024-07-25 13:52:55.554380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.630 [2024-07-25 13:52:55.554463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.630 [2024-07-25 13:52:55.554488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.630 [2024-07-25 13:52:55.554504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.630 [2024-07-25 13:52:55.554517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.630 [2024-07-25 13:52:55.554546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.630 qpair failed and we were unable to recover it.
00:23:58.630 [2024-07-25 13:52:55.564488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.630 [2024-07-25 13:52:55.564624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.630 [2024-07-25 13:52:55.564650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.630 [2024-07-25 13:52:55.564665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.630 [2024-07-25 13:52:55.564678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.630 [2024-07-25 13:52:55.564706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.630 qpair failed and we were unable to recover it.
00:23:58.630 [2024-07-25 13:52:55.574469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.630 [2024-07-25 13:52:55.574553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.630 [2024-07-25 13:52:55.574577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.630 [2024-07-25 13:52:55.574590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.630 [2024-07-25 13:52:55.574603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.630 [2024-07-25 13:52:55.574633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.630 qpair failed and we were unable to recover it.
00:23:58.630 [2024-07-25 13:52:55.584503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.630 [2024-07-25 13:52:55.584592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.630 [2024-07-25 13:52:55.584616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.630 [2024-07-25 13:52:55.584631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.630 [2024-07-25 13:52:55.584644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.630 [2024-07-25 13:52:55.584673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.630 qpair failed and we were unable to recover it.
00:23:58.630 [2024-07-25 13:52:55.594534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.630 [2024-07-25 13:52:55.594648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.630 [2024-07-25 13:52:55.594674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.630 [2024-07-25 13:52:55.594690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.630 [2024-07-25 13:52:55.594703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.630 [2024-07-25 13:52:55.594733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.630 qpair failed and we were unable to recover it.
00:23:58.630 [2024-07-25 13:52:55.604560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.630 [2024-07-25 13:52:55.604651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.630 [2024-07-25 13:52:55.604675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.630 [2024-07-25 13:52:55.604689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.630 [2024-07-25 13:52:55.604702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.630 [2024-07-25 13:52:55.604731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.630 qpair failed and we were unable to recover it.
00:23:58.630 [2024-07-25 13:52:55.614662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.630 [2024-07-25 13:52:55.614788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.630 [2024-07-25 13:52:55.614828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.630 [2024-07-25 13:52:55.614842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.630 [2024-07-25 13:52:55.614855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.630 [2024-07-25 13:52:55.614897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.630 qpair failed and we were unable to recover it.
00:23:58.630 [2024-07-25 13:52:55.624657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.630 [2024-07-25 13:52:55.624747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.630 [2024-07-25 13:52:55.624771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.630 [2024-07-25 13:52:55.624785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.630 [2024-07-25 13:52:55.624803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.630 [2024-07-25 13:52:55.624833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.630 qpair failed and we were unable to recover it.
00:23:58.630 [2024-07-25 13:52:55.634621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.630 [2024-07-25 13:52:55.634709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.630 [2024-07-25 13:52:55.634734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.630 [2024-07-25 13:52:55.634748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.630 [2024-07-25 13:52:55.634761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.630 [2024-07-25 13:52:55.634801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.630 qpair failed and we were unable to recover it.
00:23:58.630 [2024-07-25 13:52:55.644700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.630 [2024-07-25 13:52:55.644794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.630 [2024-07-25 13:52:55.644818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.630 [2024-07-25 13:52:55.644833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.630 [2024-07-25 13:52:55.644846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.630 [2024-07-25 13:52:55.644875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.630 qpair failed and we were unable to recover it.
00:23:58.630 [2024-07-25 13:52:55.654760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.630 [2024-07-25 13:52:55.654844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.630 [2024-07-25 13:52:55.654868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.630 [2024-07-25 13:52:55.654882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.630 [2024-07-25 13:52:55.654895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.630 [2024-07-25 13:52:55.654924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.630 qpair failed and we were unable to recover it.
00:23:58.888 [2024-07-25 13:52:55.664695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.888 [2024-07-25 13:52:55.664781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.888 [2024-07-25 13:52:55.664805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.888 [2024-07-25 13:52:55.664820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.888 [2024-07-25 13:52:55.664833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.888 [2024-07-25 13:52:55.664862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.888 qpair failed and we were unable to recover it.
00:23:58.888 [2024-07-25 13:52:55.674749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.888 [2024-07-25 13:52:55.674827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.888 [2024-07-25 13:52:55.674852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.888 [2024-07-25 13:52:55.674867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.888 [2024-07-25 13:52:55.674879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.888 [2024-07-25 13:52:55.674908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.888 qpair failed and we were unable to recover it.
00:23:58.888 [2024-07-25 13:52:55.684861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.888 [2024-07-25 13:52:55.684950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.888 [2024-07-25 13:52:55.684974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.888 [2024-07-25 13:52:55.684989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.888 [2024-07-25 13:52:55.685002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.888 [2024-07-25 13:52:55.685043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.888 qpair failed and we were unable to recover it.
00:23:58.888 [2024-07-25 13:52:55.694783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.888 [2024-07-25 13:52:55.694903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.888 [2024-07-25 13:52:55.694929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.888 [2024-07-25 13:52:55.694944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.889 [2024-07-25 13:52:55.694957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.889 [2024-07-25 13:52:55.694986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.889 qpair failed and we were unable to recover it.
00:23:58.889 [2024-07-25 13:52:55.704896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.889 [2024-07-25 13:52:55.704986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.889 [2024-07-25 13:52:55.705010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.889 [2024-07-25 13:52:55.705025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.889 [2024-07-25 13:52:55.705038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.889 [2024-07-25 13:52:55.705086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.889 qpair failed and we were unable to recover it.
00:23:58.889 [2024-07-25 13:52:55.714855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.889 [2024-07-25 13:52:55.714937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.889 [2024-07-25 13:52:55.714962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.889 [2024-07-25 13:52:55.714982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.889 [2024-07-25 13:52:55.714996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.889 [2024-07-25 13:52:55.715025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.889 qpair failed and we were unable to recover it.
00:23:58.889 [2024-07-25 13:52:55.724897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.889 [2024-07-25 13:52:55.724984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.889 [2024-07-25 13:52:55.725008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.889 [2024-07-25 13:52:55.725023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.889 [2024-07-25 13:52:55.725036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.889 [2024-07-25 13:52:55.725072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.889 qpair failed and we were unable to recover it.
00:23:58.889 [2024-07-25 13:52:55.734891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.889 [2024-07-25 13:52:55.735002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.889 [2024-07-25 13:52:55.735029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.889 [2024-07-25 13:52:55.735045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.889 [2024-07-25 13:52:55.735057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.889 [2024-07-25 13:52:55.735096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.889 qpair failed and we were unable to recover it.
00:23:58.889 [2024-07-25 13:52:55.744956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.889 [2024-07-25 13:52:55.745044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.889 [2024-07-25 13:52:55.745075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.889 [2024-07-25 13:52:55.745091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.889 [2024-07-25 13:52:55.745104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.889 [2024-07-25 13:52:55.745133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.889 qpair failed and we were unable to recover it.
00:23:58.889 [2024-07-25 13:52:55.754944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.889 [2024-07-25 13:52:55.755057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.889 [2024-07-25 13:52:55.755090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.889 [2024-07-25 13:52:55.755105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.889 [2024-07-25 13:52:55.755117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.889 [2024-07-25 13:52:55.755147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.889 qpair failed and we were unable to recover it.
00:23:58.889 [2024-07-25 13:52:55.765033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.889 [2024-07-25 13:52:55.765134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.889 [2024-07-25 13:52:55.765158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.889 [2024-07-25 13:52:55.765173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.889 [2024-07-25 13:52:55.765187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.889 [2024-07-25 13:52:55.765216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.889 qpair failed and we were unable to recover it.
00:23:58.889 [2024-07-25 13:52:55.774994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.889 [2024-07-25 13:52:55.775081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.889 [2024-07-25 13:52:55.775105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.889 [2024-07-25 13:52:55.775120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.889 [2024-07-25 13:52:55.775133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.889 [2024-07-25 13:52:55.775162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.889 qpair failed and we were unable to recover it.
00:23:58.889 [2024-07-25 13:52:55.785029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.889 [2024-07-25 13:52:55.785121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.889 [2024-07-25 13:52:55.785145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.889 [2024-07-25 13:52:55.785160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.889 [2024-07-25 13:52:55.785172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.889 [2024-07-25 13:52:55.785202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.889 qpair failed and we were unable to recover it.
00:23:58.889 [2024-07-25 13:52:55.795082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.889 [2024-07-25 13:52:55.795167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.889 [2024-07-25 13:52:55.795193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.889 [2024-07-25 13:52:55.795208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.889 [2024-07-25 13:52:55.795221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.889 [2024-07-25 13:52:55.795249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.889 qpair failed and we were unable to recover it.
00:23:58.889 [2024-07-25 13:52:55.805123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.889 [2024-07-25 13:52:55.805259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.889 [2024-07-25 13:52:55.805286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.889 [2024-07-25 13:52:55.805307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.889 [2024-07-25 13:52:55.805321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.889 [2024-07-25 13:52:55.805351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.889 qpair failed and we were unable to recover it.
00:23:58.889 [2024-07-25 13:52:55.815108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.889 [2024-07-25 13:52:55.815197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.889 [2024-07-25 13:52:55.815221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.889 [2024-07-25 13:52:55.815236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.890 [2024-07-25 13:52:55.815249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.890 [2024-07-25 13:52:55.815278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.890 qpair failed and we were unable to recover it.
00:23:58.890 [2024-07-25 13:52:55.825201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.890 [2024-07-25 13:52:55.825295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.890 [2024-07-25 13:52:55.825319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.890 [2024-07-25 13:52:55.825334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.890 [2024-07-25 13:52:55.825346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.890 [2024-07-25 13:52:55.825376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.890 qpair failed and we were unable to recover it.
00:23:58.890 [2024-07-25 13:52:55.835235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.890 [2024-07-25 13:52:55.835329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.890 [2024-07-25 13:52:55.835353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.890 [2024-07-25 13:52:55.835368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.890 [2024-07-25 13:52:55.835380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.890 [2024-07-25 13:52:55.835409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.890 qpair failed and we were unable to recover it.
00:23:58.890 [2024-07-25 13:52:55.845219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.890 [2024-07-25 13:52:55.845358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.890 [2024-07-25 13:52:55.845382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.890 [2024-07-25 13:52:55.845397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.890 [2024-07-25 13:52:55.845409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.890 [2024-07-25 13:52:55.845453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.890 qpair failed and we were unable to recover it.
00:23:58.890 [2024-07-25 13:52:55.855332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.890 [2024-07-25 13:52:55.855450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.890 [2024-07-25 13:52:55.855491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.890 [2024-07-25 13:52:55.855506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.890 [2024-07-25 13:52:55.855518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.890 [2024-07-25 13:52:55.855573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.890 qpair failed and we were unable to recover it.
00:23:58.890 [2024-07-25 13:52:55.865258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.890 [2024-07-25 13:52:55.865368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.890 [2024-07-25 13:52:55.865394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.890 [2024-07-25 13:52:55.865409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.890 [2024-07-25 13:52:55.865422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.890 [2024-07-25 13:52:55.865452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.890 qpair failed and we were unable to recover it.
00:23:58.890 [2024-07-25 13:52:55.875310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.890 [2024-07-25 13:52:55.875430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.890 [2024-07-25 13:52:55.875456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.890 [2024-07-25 13:52:55.875472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.890 [2024-07-25 13:52:55.875484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.890 [2024-07-25 13:52:55.875513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.890 qpair failed and we were unable to recover it.
00:23:58.890 [2024-07-25 13:52:55.885332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.890 [2024-07-25 13:52:55.885421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.890 [2024-07-25 13:52:55.885446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.890 [2024-07-25 13:52:55.885461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.890 [2024-07-25 13:52:55.885474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.890 [2024-07-25 13:52:55.885503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.890 qpair failed and we were unable to recover it.
00:23:58.890 [2024-07-25 13:52:55.895351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.890 [2024-07-25 13:52:55.895435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.890 [2024-07-25 13:52:55.895464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.890 [2024-07-25 13:52:55.895479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.890 [2024-07-25 13:52:55.895492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.890 [2024-07-25 13:52:55.895522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.890 qpair failed and we were unable to recover it.
00:23:58.890 [2024-07-25 13:52:55.905458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.890 [2024-07-25 13:52:55.905546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.890 [2024-07-25 13:52:55.905570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.890 [2024-07-25 13:52:55.905599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.890 [2024-07-25 13:52:55.905613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.890 [2024-07-25 13:52:55.905641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.890 qpair failed and we were unable to recover it.
00:23:58.890 [2024-07-25 13:52:55.915408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:58.890 [2024-07-25 13:52:55.915494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:58.890 [2024-07-25 13:52:55.915519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:58.890 [2024-07-25 13:52:55.915534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:58.890 [2024-07-25 13:52:55.915547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:58.890 [2024-07-25 13:52:55.915576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.890 qpair failed and we were unable to recover it.
00:23:59.150 [2024-07-25 13:52:55.925529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:59.150 [2024-07-25 13:52:55.925621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:59.150 [2024-07-25 13:52:55.925645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:59.150 [2024-07-25 13:52:55.925660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:59.150 [2024-07-25 13:52:55.925673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:59.150 [2024-07-25 13:52:55.925702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:59.150 qpair failed and we were unable to recover it.
00:23:59.150 [2024-07-25 13:52:55.935449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:59.150 [2024-07-25 13:52:55.935543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:59.150 [2024-07-25 13:52:55.935571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:59.150 [2024-07-25 13:52:55.935587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:59.150 [2024-07-25 13:52:55.935599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:59.150 [2024-07-25 13:52:55.935634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:59.150 qpair failed and we were unable to recover it.
00:23:59.150 [2024-07-25 13:52:55.945489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:59.150 [2024-07-25 13:52:55.945578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:59.150 [2024-07-25 13:52:55.945603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:59.150 [2024-07-25 13:52:55.945618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:59.150 [2024-07-25 13:52:55.945630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:59.150 [2024-07-25 13:52:55.945660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:59.150 qpair failed and we were unable to recover it.
00:23:59.150 [2024-07-25 13:52:55.955609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:59.150 [2024-07-25 13:52:55.955726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:59.150 [2024-07-25 13:52:55.955770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:59.150 [2024-07-25 13:52:55.955787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:59.150 [2024-07-25 13:52:55.955799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:59.150 [2024-07-25 13:52:55.955844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:59.150 qpair failed and we were unable to recover it.
00:23:59.150 [2024-07-25 13:52:55.965577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:59.150 [2024-07-25 13:52:55.965666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:59.150 [2024-07-25 13:52:55.965692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:59.150 [2024-07-25 13:52:55.965707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:59.150 [2024-07-25 13:52:55.965720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:59.150 [2024-07-25 13:52:55.965750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:59.150 qpair failed and we were unable to recover it.
00:23:59.150 [2024-07-25 13:52:55.975553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:59.150 [2024-07-25 13:52:55.975674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:59.150 [2024-07-25 13:52:55.975701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:59.150 [2024-07-25 13:52:55.975717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:59.150 [2024-07-25 13:52:55.975730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:59.150 [2024-07-25 13:52:55.975771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:59.150 qpair failed and we were unable to recover it.
00:23:59.150 [2024-07-25 13:52:55.985581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:59.150 [2024-07-25 13:52:55.985668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:59.150 [2024-07-25 13:52:55.985698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:59.150 [2024-07-25 13:52:55.985713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:59.150 [2024-07-25 13:52:55.985726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:59.151 [2024-07-25 13:52:55.985756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:59.151 qpair failed and we were unable to recover it.
00:23:59.151 [2024-07-25 13:52:55.995668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:59.151 [2024-07-25 13:52:55.995787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:59.151 [2024-07-25 13:52:55.995813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:59.151 [2024-07-25 13:52:55.995829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:59.151 [2024-07-25 13:52:55.995842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:59.151 [2024-07-25 13:52:55.995872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:59.151 qpair failed and we were unable to recover it.
00:23:59.151 [2024-07-25 13:52:56.005667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:23:59.151 [2024-07-25 13:52:56.005758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:23:59.151 [2024-07-25 13:52:56.005782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:23:59.151 [2024-07-25 13:52:56.005797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:23:59.151 [2024-07-25 13:52:56.005810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:23:59.151 [2024-07-25 13:52:56.005840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:59.151 qpair failed and we were unable to recover it.
00:23:59.151 [2024-07-25 13:52:56.015766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.151 [2024-07-25 13:52:56.015896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.151 [2024-07-25 13:52:56.015937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.151 [2024-07-25 13:52:56.015952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.151 [2024-07-25 13:52:56.015964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.151 [2024-07-25 13:52:56.016020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.151 qpair failed and we were unable to recover it. 00:23:59.151 [2024-07-25 13:52:56.025704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.151 [2024-07-25 13:52:56.025793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.151 [2024-07-25 13:52:56.025817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.151 [2024-07-25 13:52:56.025831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.151 [2024-07-25 13:52:56.025850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.151 [2024-07-25 13:52:56.025880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.151 qpair failed and we were unable to recover it. 00:23:59.151 [2024-07-25 13:52:56.035721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.151 [2024-07-25 13:52:56.035808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.151 [2024-07-25 13:52:56.035832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.151 [2024-07-25 13:52:56.035847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.151 [2024-07-25 13:52:56.035860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.151 [2024-07-25 13:52:56.035889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.151 qpair failed and we were unable to recover it. 
00:23:59.151 [2024-07-25 13:52:56.045862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.151 [2024-07-25 13:52:56.045952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.151 [2024-07-25 13:52:56.045976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.151 [2024-07-25 13:52:56.045990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.151 [2024-07-25 13:52:56.046003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.151 [2024-07-25 13:52:56.046048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.151 qpair failed and we were unable to recover it. 00:23:59.151 [2024-07-25 13:52:56.055944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.151 [2024-07-25 13:52:56.056043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.151 [2024-07-25 13:52:56.056074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.151 [2024-07-25 13:52:56.056089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.151 [2024-07-25 13:52:56.056102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.151 [2024-07-25 13:52:56.056132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.151 qpair failed and we were unable to recover it. 00:23:59.151 [2024-07-25 13:52:56.065856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.151 [2024-07-25 13:52:56.065946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.151 [2024-07-25 13:52:56.065970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.151 [2024-07-25 13:52:56.065986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.151 [2024-07-25 13:52:56.065999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.151 [2024-07-25 13:52:56.066028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.151 qpair failed and we were unable to recover it. 
00:23:59.151 [2024-07-25 13:52:56.075873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.151 [2024-07-25 13:52:56.075968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.151 [2024-07-25 13:52:56.075993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.151 [2024-07-25 13:52:56.076008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.151 [2024-07-25 13:52:56.076020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.151 [2024-07-25 13:52:56.076050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.151 qpair failed and we were unable to recover it. 00:23:59.151 [2024-07-25 13:52:56.085935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.151 [2024-07-25 13:52:56.086079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.151 [2024-07-25 13:52:56.086105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.151 [2024-07-25 13:52:56.086121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.151 [2024-07-25 13:52:56.086133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.151 [2024-07-25 13:52:56.086163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.151 qpair failed and we were unable to recover it. 00:23:59.151 [2024-07-25 13:52:56.095882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.151 [2024-07-25 13:52:56.095971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.151 [2024-07-25 13:52:56.095996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.151 [2024-07-25 13:52:56.096011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.151 [2024-07-25 13:52:56.096024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.151 [2024-07-25 13:52:56.096052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.151 qpair failed and we were unable to recover it. 
00:23:59.151 [2024-07-25 13:52:56.105936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.151 [2024-07-25 13:52:56.106021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.151 [2024-07-25 13:52:56.106046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.151 [2024-07-25 13:52:56.106067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.151 [2024-07-25 13:52:56.106082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.151 [2024-07-25 13:52:56.106125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.152 qpair failed and we were unable to recover it. 00:23:59.152 [2024-07-25 13:52:56.115961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.152 [2024-07-25 13:52:56.116084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.152 [2024-07-25 13:52:56.116111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.152 [2024-07-25 13:52:56.116132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.152 [2024-07-25 13:52:56.116146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.152 [2024-07-25 13:52:56.116175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.152 qpair failed and we were unable to recover it. 00:23:59.152 [2024-07-25 13:52:56.125993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.152 [2024-07-25 13:52:56.126099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.152 [2024-07-25 13:52:56.126124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.152 [2024-07-25 13:52:56.126139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.152 [2024-07-25 13:52:56.126152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.152 [2024-07-25 13:52:56.126182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.152 qpair failed and we were unable to recover it. 
00:23:59.152 [2024-07-25 13:52:56.136078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.152 [2024-07-25 13:52:56.136172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.152 [2024-07-25 13:52:56.136197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.152 [2024-07-25 13:52:56.136211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.152 [2024-07-25 13:52:56.136224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.152 [2024-07-25 13:52:56.136253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.152 qpair failed and we were unable to recover it. 00:23:59.152 [2024-07-25 13:52:56.146053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.152 [2024-07-25 13:52:56.146161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.152 [2024-07-25 13:52:56.146189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.152 [2024-07-25 13:52:56.146204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.152 [2024-07-25 13:52:56.146217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.152 [2024-07-25 13:52:56.146246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.152 qpair failed and we were unable to recover it. 00:23:59.152 [2024-07-25 13:52:56.156126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.152 [2024-07-25 13:52:56.156209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.152 [2024-07-25 13:52:56.156234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.152 [2024-07-25 13:52:56.156248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.152 [2024-07-25 13:52:56.156261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.152 [2024-07-25 13:52:56.156291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.152 qpair failed and we were unable to recover it. 
00:23:59.152 [2024-07-25 13:52:56.166099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.152 [2024-07-25 13:52:56.166192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.152 [2024-07-25 13:52:56.166217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.152 [2024-07-25 13:52:56.166231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.152 [2024-07-25 13:52:56.166244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.152 [2024-07-25 13:52:56.166273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.152 qpair failed and we were unable to recover it. 00:23:59.152 [2024-07-25 13:52:56.176141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.152 [2024-07-25 13:52:56.176261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.152 [2024-07-25 13:52:56.176288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.152 [2024-07-25 13:52:56.176314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.152 [2024-07-25 13:52:56.176327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.152 [2024-07-25 13:52:56.176356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.152 qpair failed and we were unable to recover it. 00:23:59.414 [2024-07-25 13:52:56.186173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.414 [2024-07-25 13:52:56.186263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.414 [2024-07-25 13:52:56.186290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.414 [2024-07-25 13:52:56.186305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.414 [2024-07-25 13:52:56.186318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.414 [2024-07-25 13:52:56.186348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.414 qpair failed and we were unable to recover it. 
00:23:59.414 [2024-07-25 13:52:56.196198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.414 [2024-07-25 13:52:56.196319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.414 [2024-07-25 13:52:56.196345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.414 [2024-07-25 13:52:56.196360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.414 [2024-07-25 13:52:56.196373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.414 [2024-07-25 13:52:56.196402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.414 qpair failed and we were unable to recover it. 00:23:59.414 [2024-07-25 13:52:56.206223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.414 [2024-07-25 13:52:56.206344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.414 [2024-07-25 13:52:56.206370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.414 [2024-07-25 13:52:56.206391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.414 [2024-07-25 13:52:56.206405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.414 [2024-07-25 13:52:56.206435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.414 qpair failed and we were unable to recover it. 00:23:59.414 [2024-07-25 13:52:56.216367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.414 [2024-07-25 13:52:56.216503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.414 [2024-07-25 13:52:56.216533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.414 [2024-07-25 13:52:56.216550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.414 [2024-07-25 13:52:56.216563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.414 [2024-07-25 13:52:56.216609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.414 qpair failed and we were unable to recover it. 
00:23:59.414 [2024-07-25 13:52:56.226370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.414 [2024-07-25 13:52:56.226460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.414 [2024-07-25 13:52:56.226484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.414 [2024-07-25 13:52:56.226499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.414 [2024-07-25 13:52:56.226511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.414 [2024-07-25 13:52:56.226541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.414 qpair failed and we were unable to recover it. 00:23:59.414 [2024-07-25 13:52:56.236329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.414 [2024-07-25 13:52:56.236413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.414 [2024-07-25 13:52:56.236437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.414 [2024-07-25 13:52:56.236452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.414 [2024-07-25 13:52:56.236465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.414 [2024-07-25 13:52:56.236494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.414 qpair failed and we were unable to recover it. 00:23:59.414 [2024-07-25 13:52:56.246456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.414 [2024-07-25 13:52:56.246585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.414 [2024-07-25 13:52:56.246625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.414 [2024-07-25 13:52:56.246640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.414 [2024-07-25 13:52:56.246659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.414 [2024-07-25 13:52:56.246715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.414 qpair failed and we were unable to recover it. 
00:23:59.414 [2024-07-25 13:52:56.256396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.414 [2024-07-25 13:52:56.256490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.414 [2024-07-25 13:52:56.256515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.414 [2024-07-25 13:52:56.256529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.414 [2024-07-25 13:52:56.256542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.414 [2024-07-25 13:52:56.256572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.414 qpair failed and we were unable to recover it. 00:23:59.414 [2024-07-25 13:52:56.266433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.414 [2024-07-25 13:52:56.266536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.414 [2024-07-25 13:52:56.266561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.414 [2024-07-25 13:52:56.266576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.414 [2024-07-25 13:52:56.266589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.414 [2024-07-25 13:52:56.266629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.414 qpair failed and we were unable to recover it. 00:23:59.414 [2024-07-25 13:52:56.276448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.414 [2024-07-25 13:52:56.276531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.414 [2024-07-25 13:52:56.276556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.414 [2024-07-25 13:52:56.276571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.414 [2024-07-25 13:52:56.276583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.414 [2024-07-25 13:52:56.276613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.414 qpair failed and we were unable to recover it. 
00:23:59.414 [2024-07-25 13:52:56.286484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.414 [2024-07-25 13:52:56.286582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.414 [2024-07-25 13:52:56.286606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.414 [2024-07-25 13:52:56.286621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.414 [2024-07-25 13:52:56.286634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.414 [2024-07-25 13:52:56.286664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.414 qpair failed and we were unable to recover it. 00:23:59.414 [2024-07-25 13:52:56.296493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.414 [2024-07-25 13:52:56.296584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.414 [2024-07-25 13:52:56.296614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.414 [2024-07-25 13:52:56.296630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.415 [2024-07-25 13:52:56.296643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.415 [2024-07-25 13:52:56.296672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.415 qpair failed and we were unable to recover it. 00:23:59.415 [2024-07-25 13:52:56.306523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.415 [2024-07-25 13:52:56.306625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.415 [2024-07-25 13:52:56.306649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.415 [2024-07-25 13:52:56.306664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.415 [2024-07-25 13:52:56.306677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.415 [2024-07-25 13:52:56.306706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.415 qpair failed and we were unable to recover it. 
00:23:59.415 [2024-07-25 13:52:56.316588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.415 [2024-07-25 13:52:56.316692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.415 [2024-07-25 13:52:56.316720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.415 [2024-07-25 13:52:56.316736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.415 [2024-07-25 13:52:56.316749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.415 [2024-07-25 13:52:56.316780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.415 qpair failed and we were unable to recover it. 00:23:59.415 [2024-07-25 13:52:56.326676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.415 [2024-07-25 13:52:56.326806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.415 [2024-07-25 13:52:56.326831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.415 [2024-07-25 13:52:56.326846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.415 [2024-07-25 13:52:56.326858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.415 [2024-07-25 13:52:56.326888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.415 qpair failed and we were unable to recover it. 00:23:59.415 [2024-07-25 13:52:56.336616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.415 [2024-07-25 13:52:56.336702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.415 [2024-07-25 13:52:56.336728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.415 [2024-07-25 13:52:56.336743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.415 [2024-07-25 13:52:56.336756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.415 [2024-07-25 13:52:56.336790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.415 qpair failed and we were unable to recover it. 
00:23:59.415 [2024-07-25 13:52:56.346675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.415 [2024-07-25 13:52:56.346778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.415 [2024-07-25 13:52:56.346803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.415 [2024-07-25 13:52:56.346817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.415 [2024-07-25 13:52:56.346830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.415 [2024-07-25 13:52:56.346860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.415 qpair failed and we were unable to recover it. 00:23:59.415 [2024-07-25 13:52:56.356684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.415 [2024-07-25 13:52:56.356806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.415 [2024-07-25 13:52:56.356831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.415 [2024-07-25 13:52:56.356845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.415 [2024-07-25 13:52:56.356858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.415 [2024-07-25 13:52:56.356887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.415 qpair failed and we were unable to recover it. 00:23:59.415 [2024-07-25 13:52:56.366689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.415 [2024-07-25 13:52:56.366779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.415 [2024-07-25 13:52:56.366804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.415 [2024-07-25 13:52:56.366819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.415 [2024-07-25 13:52:56.366832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.415 [2024-07-25 13:52:56.366861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.415 qpair failed and we were unable to recover it. 
00:23:59.415 [2024-07-25 13:52:56.376717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.415 [2024-07-25 13:52:56.376810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.415 [2024-07-25 13:52:56.376838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.415 [2024-07-25 13:52:56.376855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.415 [2024-07-25 13:52:56.376868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.415 [2024-07-25 13:52:56.376899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.415 qpair failed and we were unable to recover it. 00:23:59.415 [2024-07-25 13:52:56.386718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.415 [2024-07-25 13:52:56.386850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.415 [2024-07-25 13:52:56.386880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.415 [2024-07-25 13:52:56.386896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.415 [2024-07-25 13:52:56.386908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.415 [2024-07-25 13:52:56.386938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.415 qpair failed and we were unable to recover it. 00:23:59.415 [2024-07-25 13:52:56.396770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.415 [2024-07-25 13:52:56.396862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.415 [2024-07-25 13:52:56.396888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.415 [2024-07-25 13:52:56.396903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.415 [2024-07-25 13:52:56.396915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.415 [2024-07-25 13:52:56.396945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.415 qpair failed and we were unable to recover it. 
00:23:59.415 [2024-07-25 13:52:56.406843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.415 [2024-07-25 13:52:56.406979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.415 [2024-07-25 13:52:56.407004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.415 [2024-07-25 13:52:56.407020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.415 [2024-07-25 13:52:56.407032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.415 [2024-07-25 13:52:56.407069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.415 qpair failed and we were unable to recover it. 00:23:59.415 [2024-07-25 13:52:56.416816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.415 [2024-07-25 13:52:56.416913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.415 [2024-07-25 13:52:56.416942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.416 [2024-07-25 13:52:56.416958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.416 [2024-07-25 13:52:56.416971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.416 [2024-07-25 13:52:56.417002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.416 qpair failed and we were unable to recover it. 00:23:59.416 [2024-07-25 13:52:56.426838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.416 [2024-07-25 13:52:56.426925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.416 [2024-07-25 13:52:56.426950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.416 [2024-07-25 13:52:56.426966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.416 [2024-07-25 13:52:56.426983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.416 [2024-07-25 13:52:56.427013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.416 qpair failed and we were unable to recover it. 
00:23:59.416 [2024-07-25 13:52:56.436858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.416 [2024-07-25 13:52:56.436986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.416 [2024-07-25 13:52:56.437011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.416 [2024-07-25 13:52:56.437026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.416 [2024-07-25 13:52:56.437039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.416 [2024-07-25 13:52:56.437075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.416 qpair failed and we were unable to recover it. 00:23:59.416 [2024-07-25 13:52:56.446957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.416 [2024-07-25 13:52:56.447055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.416 [2024-07-25 13:52:56.447086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.416 [2024-07-25 13:52:56.447102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.416 [2024-07-25 13:52:56.447115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.416 [2024-07-25 13:52:56.447145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.416 qpair failed and we were unable to recover it. 00:23:59.677 [2024-07-25 13:52:56.456981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.677 [2024-07-25 13:52:56.457078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.677 [2024-07-25 13:52:56.457104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.677 [2024-07-25 13:52:56.457119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.677 [2024-07-25 13:52:56.457133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.677 [2024-07-25 13:52:56.457163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.677 qpair failed and we were unable to recover it. 
00:23:59.677 [2024-07-25 13:52:56.466995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.677 [2024-07-25 13:52:56.467092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.677 [2024-07-25 13:52:56.467118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.677 [2024-07-25 13:52:56.467133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.677 [2024-07-25 13:52:56.467146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.677 [2024-07-25 13:52:56.467175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.677 qpair failed and we were unable to recover it. 00:23:59.677 [2024-07-25 13:52:56.476992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.677 [2024-07-25 13:52:56.477083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.677 [2024-07-25 13:52:56.477108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.677 [2024-07-25 13:52:56.477123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.677 [2024-07-25 13:52:56.477136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.677 [2024-07-25 13:52:56.477165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.677 qpair failed and we were unable to recover it. 00:23:59.677 [2024-07-25 13:52:56.487069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.677 [2024-07-25 13:52:56.487161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.677 [2024-07-25 13:52:56.487187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.677 [2024-07-25 13:52:56.487202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.677 [2024-07-25 13:52:56.487214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.677 [2024-07-25 13:52:56.487245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.677 qpair failed and we were unable to recover it. 
00:23:59.677 [2024-07-25 13:52:56.497133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.678 [2024-07-25 13:52:56.497231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.678 [2024-07-25 13:52:56.497256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.678 [2024-07-25 13:52:56.497271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.678 [2024-07-25 13:52:56.497284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.678 [2024-07-25 13:52:56.497315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.678 qpair failed and we were unable to recover it. 00:23:59.678 [2024-07-25 13:52:56.507087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.678 [2024-07-25 13:52:56.507178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.678 [2024-07-25 13:52:56.507202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.678 [2024-07-25 13:52:56.507218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.678 [2024-07-25 13:52:56.507231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.678 [2024-07-25 13:52:56.507260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.678 qpair failed and we were unable to recover it. 00:23:59.678 [2024-07-25 13:52:56.517095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.678 [2024-07-25 13:52:56.517184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.678 [2024-07-25 13:52:56.517210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.678 [2024-07-25 13:52:56.517224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.678 [2024-07-25 13:52:56.517242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.678 [2024-07-25 13:52:56.517273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.678 qpair failed and we were unable to recover it. 
00:23:59.678 [2024-07-25 13:52:56.527230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.678 [2024-07-25 13:52:56.527318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.678 [2024-07-25 13:52:56.527343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.678 [2024-07-25 13:52:56.527358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.678 [2024-07-25 13:52:56.527371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.678 [2024-07-25 13:52:56.527401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.678 qpair failed and we were unable to recover it. 00:23:59.678 [2024-07-25 13:52:56.537207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.678 [2024-07-25 13:52:56.537329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.678 [2024-07-25 13:52:56.537357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.678 [2024-07-25 13:52:56.537375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.678 [2024-07-25 13:52:56.537387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.678 [2024-07-25 13:52:56.537417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.678 qpair failed and we were unable to recover it. 00:23:59.678 [2024-07-25 13:52:56.547259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:23:59.678 [2024-07-25 13:52:56.547349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:23:59.678 [2024-07-25 13:52:56.547375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:23:59.678 [2024-07-25 13:52:56.547391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:23:59.678 [2024-07-25 13:52:56.547404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:23:59.678 [2024-07-25 13:52:56.547434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:59.678 qpair failed and we were unable to recover it. 
[... 66 further repetitions of the identical seven-line CONNECT failure sequence elided (Unknown controller ID 0x1 -> Connect command failed, rc -5 -> sct 1, sc 130 -> Failed to connect tqpair=0x7f3c90000b90 -> CQ transport error -6 on qpair id 2), timestamps 2024-07-25 13:52:56.557 through 13:52:57.209, elapsed 00:23:59.678 to 00:24:00.205; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:24:00.205 [2024-07-25 13:52:57.219146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.205 [2024-07-25 13:52:57.219234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.205 [2024-07-25 13:52:57.219259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.205 [2024-07-25 13:52:57.219274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.205 [2024-07-25 13:52:57.219286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.205 [2024-07-25 13:52:57.219315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.205 qpair failed and we were unable to recover it. 00:24:00.205 [2024-07-25 13:52:57.229185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.205 [2024-07-25 13:52:57.229277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.205 [2024-07-25 13:52:57.229302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.205 [2024-07-25 13:52:57.229320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.205 [2024-07-25 13:52:57.229333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.205 [2024-07-25 13:52:57.229369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.205 qpair failed and we were unable to recover it. 00:24:00.465 [2024-07-25 13:52:57.239197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.465 [2024-07-25 13:52:57.239313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.465 [2024-07-25 13:52:57.239339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.465 [2024-07-25 13:52:57.239354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.465 [2024-07-25 13:52:57.239367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.465 [2024-07-25 13:52:57.239396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.465 qpair failed and we were unable to recover it. 
00:24:00.465 [2024-07-25 13:52:57.249256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.465 [2024-07-25 13:52:57.249384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.465 [2024-07-25 13:52:57.249410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.465 [2024-07-25 13:52:57.249426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.465 [2024-07-25 13:52:57.249438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.465 [2024-07-25 13:52:57.249468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.465 qpair failed and we were unable to recover it. 00:24:00.465 [2024-07-25 13:52:57.259254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.465 [2024-07-25 13:52:57.259346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.465 [2024-07-25 13:52:57.259376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.465 [2024-07-25 13:52:57.259394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.465 [2024-07-25 13:52:57.259408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.465 [2024-07-25 13:52:57.259438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.465 qpair failed and we were unable to recover it. 00:24:00.465 [2024-07-25 13:52:57.269360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.465 [2024-07-25 13:52:57.269452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.465 [2024-07-25 13:52:57.269477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.465 [2024-07-25 13:52:57.269492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.465 [2024-07-25 13:52:57.269506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.465 [2024-07-25 13:52:57.269535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.465 qpair failed and we were unable to recover it. 
00:24:00.465 [2024-07-25 13:52:57.279299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.465 [2024-07-25 13:52:57.279404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.465 [2024-07-25 13:52:57.279431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.465 [2024-07-25 13:52:57.279447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.465 [2024-07-25 13:52:57.279460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.465 [2024-07-25 13:52:57.279489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.465 qpair failed and we were unable to recover it. 00:24:00.465 [2024-07-25 13:52:57.289326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.465 [2024-07-25 13:52:57.289424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.465 [2024-07-25 13:52:57.289448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.465 [2024-07-25 13:52:57.289463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.465 [2024-07-25 13:52:57.289476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.465 [2024-07-25 13:52:57.289505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.465 qpair failed and we were unable to recover it. 00:24:00.465 [2024-07-25 13:52:57.299333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.465 [2024-07-25 13:52:57.299446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.465 [2024-07-25 13:52:57.299471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.465 [2024-07-25 13:52:57.299487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.465 [2024-07-25 13:52:57.299499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.465 [2024-07-25 13:52:57.299528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.465 qpair failed and we were unable to recover it. 
00:24:00.465 [2024-07-25 13:52:57.309365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.465 [2024-07-25 13:52:57.309458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.465 [2024-07-25 13:52:57.309498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.465 [2024-07-25 13:52:57.309514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.465 [2024-07-25 13:52:57.309527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.465 [2024-07-25 13:52:57.309557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.465 qpair failed and we were unable to recover it. 00:24:00.465 [2024-07-25 13:52:57.319421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.465 [2024-07-25 13:52:57.319513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.465 [2024-07-25 13:52:57.319549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.465 [2024-07-25 13:52:57.319564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.465 [2024-07-25 13:52:57.319582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.465 [2024-07-25 13:52:57.319612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.465 qpair failed and we were unable to recover it. 00:24:00.465 [2024-07-25 13:52:57.329465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.465 [2024-07-25 13:52:57.329565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.465 [2024-07-25 13:52:57.329591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.465 [2024-07-25 13:52:57.329607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.465 [2024-07-25 13:52:57.329620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.465 [2024-07-25 13:52:57.329649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.465 qpair failed and we were unable to recover it. 
00:24:00.465 [2024-07-25 13:52:57.339484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.465 [2024-07-25 13:52:57.339622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.466 [2024-07-25 13:52:57.339647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.466 [2024-07-25 13:52:57.339662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.466 [2024-07-25 13:52:57.339674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.466 [2024-07-25 13:52:57.339704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.466 qpair failed and we were unable to recover it. 00:24:00.466 [2024-07-25 13:52:57.349574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.466 [2024-07-25 13:52:57.349671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.466 [2024-07-25 13:52:57.349696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.466 [2024-07-25 13:52:57.349711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.466 [2024-07-25 13:52:57.349724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.466 [2024-07-25 13:52:57.349753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.466 qpair failed and we were unable to recover it. 00:24:00.466 [2024-07-25 13:52:57.359643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.466 [2024-07-25 13:52:57.359740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.466 [2024-07-25 13:52:57.359765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.466 [2024-07-25 13:52:57.359780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.466 [2024-07-25 13:52:57.359793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.466 [2024-07-25 13:52:57.359823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.466 qpair failed and we were unable to recover it. 
00:24:00.466 [2024-07-25 13:52:57.369570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.466 [2024-07-25 13:52:57.369695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.466 [2024-07-25 13:52:57.369724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.466 [2024-07-25 13:52:57.369741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.466 [2024-07-25 13:52:57.369753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.466 [2024-07-25 13:52:57.369783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.466 qpair failed and we were unable to recover it. 00:24:00.466 [2024-07-25 13:52:57.379589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.466 [2024-07-25 13:52:57.379680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.466 [2024-07-25 13:52:57.379704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.466 [2024-07-25 13:52:57.379718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.466 [2024-07-25 13:52:57.379731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.466 [2024-07-25 13:52:57.379760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.466 qpair failed and we were unable to recover it. 00:24:00.466 [2024-07-25 13:52:57.389596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.466 [2024-07-25 13:52:57.389684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.466 [2024-07-25 13:52:57.389708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.466 [2024-07-25 13:52:57.389722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.466 [2024-07-25 13:52:57.389735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.466 [2024-07-25 13:52:57.389765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.466 qpair failed and we were unable to recover it. 
00:24:00.466 [2024-07-25 13:52:57.399639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.466 [2024-07-25 13:52:57.399737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.466 [2024-07-25 13:52:57.399763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.466 [2024-07-25 13:52:57.399777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.466 [2024-07-25 13:52:57.399790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.466 [2024-07-25 13:52:57.399819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.466 qpair failed and we were unable to recover it. 00:24:00.466 [2024-07-25 13:52:57.409654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.466 [2024-07-25 13:52:57.409749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.466 [2024-07-25 13:52:57.409773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.466 [2024-07-25 13:52:57.409793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.466 [2024-07-25 13:52:57.409806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.466 [2024-07-25 13:52:57.409835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.466 qpair failed and we were unable to recover it. 00:24:00.466 [2024-07-25 13:52:57.419712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.466 [2024-07-25 13:52:57.419807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.466 [2024-07-25 13:52:57.419833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.466 [2024-07-25 13:52:57.419848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.466 [2024-07-25 13:52:57.419862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.466 [2024-07-25 13:52:57.419892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.466 qpair failed and we were unable to recover it. 
00:24:00.466 [2024-07-25 13:52:57.429716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.466 [2024-07-25 13:52:57.429821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.466 [2024-07-25 13:52:57.429850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.466 [2024-07-25 13:52:57.429866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.466 [2024-07-25 13:52:57.429879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.466 [2024-07-25 13:52:57.429910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.466 qpair failed and we were unable to recover it. 00:24:00.466 [2024-07-25 13:52:57.439758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.466 [2024-07-25 13:52:57.439861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.466 [2024-07-25 13:52:57.439888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.466 [2024-07-25 13:52:57.439904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.466 [2024-07-25 13:52:57.439916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.466 [2024-07-25 13:52:57.439946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.466 qpair failed and we were unable to recover it. 00:24:00.466 [2024-07-25 13:52:57.449794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.466 [2024-07-25 13:52:57.449892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.466 [2024-07-25 13:52:57.449919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.466 [2024-07-25 13:52:57.449935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.466 [2024-07-25 13:52:57.449948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.466 [2024-07-25 13:52:57.449977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.466 qpair failed and we were unable to recover it. 
00:24:00.466 [2024-07-25 13:52:57.459813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.466 [2024-07-25 13:52:57.459898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.466 [2024-07-25 13:52:57.459923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.466 [2024-07-25 13:52:57.459938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.466 [2024-07-25 13:52:57.459951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.466 [2024-07-25 13:52:57.459980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.466 qpair failed and we were unable to recover it. 00:24:00.466 [2024-07-25 13:52:57.469840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.466 [2024-07-25 13:52:57.469954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.466 [2024-07-25 13:52:57.469981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.466 [2024-07-25 13:52:57.469997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.466 [2024-07-25 13:52:57.470010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.466 [2024-07-25 13:52:57.470050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.466 qpair failed and we were unable to recover it. 00:24:00.466 [2024-07-25 13:52:57.479867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.466 [2024-07-25 13:52:57.479954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.466 [2024-07-25 13:52:57.479979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.466 [2024-07-25 13:52:57.479994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.466 [2024-07-25 13:52:57.480006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.466 [2024-07-25 13:52:57.480035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.466 qpair failed and we were unable to recover it. 
00:24:00.466 [2024-07-25 13:52:57.489897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.466 [2024-07-25 13:52:57.489983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.467 [2024-07-25 13:52:57.490008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.467 [2024-07-25 13:52:57.490023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.467 [2024-07-25 13:52:57.490035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.467 [2024-07-25 13:52:57.490072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.467 qpair failed and we were unable to recover it. 00:24:00.725 [2024-07-25 13:52:57.499973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.725 [2024-07-25 13:52:57.500085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.725 [2024-07-25 13:52:57.500112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.725 [2024-07-25 13:52:57.500133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.725 [2024-07-25 13:52:57.500146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.725 [2024-07-25 13:52:57.500176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.725 qpair failed and we were unable to recover it. 00:24:00.725 [2024-07-25 13:52:57.509966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.725 [2024-07-25 13:52:57.510105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.725 [2024-07-25 13:52:57.510131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.725 [2024-07-25 13:52:57.510147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.725 [2024-07-25 13:52:57.510160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.725 [2024-07-25 13:52:57.510200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.725 qpair failed and we were unable to recover it. 
00:24:00.725 [2024-07-25 13:52:57.519981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.725 [2024-07-25 13:52:57.520103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.725 [2024-07-25 13:52:57.520132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.725 [2024-07-25 13:52:57.520148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.725 [2024-07-25 13:52:57.520160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.725 [2024-07-25 13:52:57.520190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.725 qpair failed and we were unable to recover it. 00:24:00.725 [2024-07-25 13:52:57.530021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.725 [2024-07-25 13:52:57.530145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.725 [2024-07-25 13:52:57.530171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.725 [2024-07-25 13:52:57.530187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.725 [2024-07-25 13:52:57.530200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.725 [2024-07-25 13:52:57.530229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.725 qpair failed and we were unable to recover it. 00:24:00.725 [2024-07-25 13:52:57.540066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.725 [2024-07-25 13:52:57.540163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.725 [2024-07-25 13:52:57.540189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.725 [2024-07-25 13:52:57.540205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.725 [2024-07-25 13:52:57.540218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.725 [2024-07-25 13:52:57.540247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.725 qpair failed and we were unable to recover it. 
00:24:00.725 [2024-07-25 13:52:57.550054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.725 [2024-07-25 13:52:57.550153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.725 [2024-07-25 13:52:57.550178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.725 [2024-07-25 13:52:57.550193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.725 [2024-07-25 13:52:57.550206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.725 [2024-07-25 13:52:57.550236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.725 qpair failed and we were unable to recover it. 00:24:00.725 [2024-07-25 13:52:57.560108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.725 [2024-07-25 13:52:57.560219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.726 [2024-07-25 13:52:57.560245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.726 [2024-07-25 13:52:57.560261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.726 [2024-07-25 13:52:57.560273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.726 [2024-07-25 13:52:57.560303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.726 qpair failed and we were unable to recover it. 00:24:00.726 [2024-07-25 13:52:57.570152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.726 [2024-07-25 13:52:57.570252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.726 [2024-07-25 13:52:57.570276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.726 [2024-07-25 13:52:57.570291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.726 [2024-07-25 13:52:57.570304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.726 [2024-07-25 13:52:57.570333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.726 qpair failed and we were unable to recover it. 
00:24:00.726 [2024-07-25 13:52:57.580144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.726 [2024-07-25 13:52:57.580237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.726 [2024-07-25 13:52:57.580261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.726 [2024-07-25 13:52:57.580276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.726 [2024-07-25 13:52:57.580288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.726 [2024-07-25 13:52:57.580318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.726 qpair failed and we were unable to recover it. 00:24:00.726 [2024-07-25 13:52:57.590236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.726 [2024-07-25 13:52:57.590329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.726 [2024-07-25 13:52:57.590360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.726 [2024-07-25 13:52:57.590376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.726 [2024-07-25 13:52:57.590389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.726 [2024-07-25 13:52:57.590419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.726 qpair failed and we were unable to recover it. 00:24:00.726 [2024-07-25 13:52:57.600227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.726 [2024-07-25 13:52:57.600319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.726 [2024-07-25 13:52:57.600344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.726 [2024-07-25 13:52:57.600358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.726 [2024-07-25 13:52:57.600371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.726 [2024-07-25 13:52:57.600400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.726 qpair failed and we were unable to recover it. 
00:24:00.726 [2024-07-25 13:52:57.610278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.726 [2024-07-25 13:52:57.610378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.726 [2024-07-25 13:52:57.610406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.726 [2024-07-25 13:52:57.610425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.726 [2024-07-25 13:52:57.610438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.726 [2024-07-25 13:52:57.610482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.726 qpair failed and we were unable to recover it. 00:24:00.726 [2024-07-25 13:52:57.620253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.726 [2024-07-25 13:52:57.620346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.726 [2024-07-25 13:52:57.620371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.726 [2024-07-25 13:52:57.620386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.726 [2024-07-25 13:52:57.620398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.726 [2024-07-25 13:52:57.620428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.726 qpair failed and we were unable to recover it. 00:24:00.726 [2024-07-25 13:52:57.630304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.726 [2024-07-25 13:52:57.630393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.726 [2024-07-25 13:52:57.630420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.726 [2024-07-25 13:52:57.630435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.726 [2024-07-25 13:52:57.630448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.726 [2024-07-25 13:52:57.630483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.726 qpair failed and we were unable to recover it. 
00:24:00.726 [2024-07-25 13:52:57.640343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.726 [2024-07-25 13:52:57.640440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.726 [2024-07-25 13:52:57.640466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.726 [2024-07-25 13:52:57.640481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.726 [2024-07-25 13:52:57.640494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.726 [2024-07-25 13:52:57.640523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.726 qpair failed and we were unable to recover it. 00:24:00.726 [2024-07-25 13:52:57.650429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.726 [2024-07-25 13:52:57.650533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.726 [2024-07-25 13:52:57.650559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.726 [2024-07-25 13:52:57.650575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.726 [2024-07-25 13:52:57.650588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.726 [2024-07-25 13:52:57.650616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.726 qpair failed and we were unable to recover it. 00:24:00.726 [2024-07-25 13:52:57.660369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.726 [2024-07-25 13:52:57.660466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.726 [2024-07-25 13:52:57.660491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.726 [2024-07-25 13:52:57.660506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.726 [2024-07-25 13:52:57.660518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.726 [2024-07-25 13:52:57.660547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.726 qpair failed and we were unable to recover it. 
00:24:00.726 [2024-07-25 13:52:57.670417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.726 [2024-07-25 13:52:57.670553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.726 [2024-07-25 13:52:57.670580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.726 [2024-07-25 13:52:57.670595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.726 [2024-07-25 13:52:57.670623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.726 [2024-07-25 13:52:57.670652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.726 qpair failed and we were unable to recover it. 00:24:00.726 [2024-07-25 13:52:57.680503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.726 [2024-07-25 13:52:57.680597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.726 [2024-07-25 13:52:57.680626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.727 [2024-07-25 13:52:57.680642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.727 [2024-07-25 13:52:57.680655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.727 [2024-07-25 13:52:57.680684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.727 qpair failed and we were unable to recover it. 00:24:00.727 [2024-07-25 13:52:57.690447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.727 [2024-07-25 13:52:57.690545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.727 [2024-07-25 13:52:57.690571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.727 [2024-07-25 13:52:57.690587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.727 [2024-07-25 13:52:57.690599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.727 [2024-07-25 13:52:57.690628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.727 qpair failed and we were unable to recover it. 
00:24:00.727 [2024-07-25 13:52:57.700502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.727 [2024-07-25 13:52:57.700612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.727 [2024-07-25 13:52:57.700637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.727 [2024-07-25 13:52:57.700653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.727 [2024-07-25 13:52:57.700665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.727 [2024-07-25 13:52:57.700694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.727 qpair failed and we were unable to recover it. 00:24:00.727 [2024-07-25 13:52:57.710547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.727 [2024-07-25 13:52:57.710643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.727 [2024-07-25 13:52:57.710669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.727 [2024-07-25 13:52:57.710684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.727 [2024-07-25 13:52:57.710697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.727 [2024-07-25 13:52:57.710725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.727 qpair failed and we were unable to recover it. 00:24:00.727 [2024-07-25 13:52:57.720542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:00.727 [2024-07-25 13:52:57.720677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:00.727 [2024-07-25 13:52:57.720702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:00.727 [2024-07-25 13:52:57.720718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:00.727 [2024-07-25 13:52:57.720735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:00.727 [2024-07-25 13:52:57.720780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.727 qpair failed and we were unable to recover it. 
00:24:00.727 [2024-07-25 13:52:57.730635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.727 [2024-07-25 13:52:57.730731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.727 [2024-07-25 13:52:57.730757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.727 [2024-07-25 13:52:57.730773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.727 [2024-07-25 13:52:57.730785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.727 [2024-07-25 13:52:57.730814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.727 qpair failed and we were unable to recover it.
00:24:00.727 [2024-07-25 13:52:57.740599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.727 [2024-07-25 13:52:57.740693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.727 [2024-07-25 13:52:57.740719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.727 [2024-07-25 13:52:57.740735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.727 [2024-07-25 13:52:57.740748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.727 [2024-07-25 13:52:57.740789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.727 qpair failed and we were unable to recover it.
00:24:00.727 [2024-07-25 13:52:57.750694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.727 [2024-07-25 13:52:57.750801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.727 [2024-07-25 13:52:57.750827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.727 [2024-07-25 13:52:57.750842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.727 [2024-07-25 13:52:57.750855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.727 [2024-07-25 13:52:57.750884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.727 qpair failed and we were unable to recover it.
00:24:00.989 [2024-07-25 13:52:57.760725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.989 [2024-07-25 13:52:57.760818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.989 [2024-07-25 13:52:57.760844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.989 [2024-07-25 13:52:57.760860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.989 [2024-07-25 13:52:57.760873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.989 [2024-07-25 13:52:57.760902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.989 qpair failed and we were unable to recover it.
00:24:00.989 [2024-07-25 13:52:57.770681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.989 [2024-07-25 13:52:57.770806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.989 [2024-07-25 13:52:57.770833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.989 [2024-07-25 13:52:57.770848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.989 [2024-07-25 13:52:57.770860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.989 [2024-07-25 13:52:57.770890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.989 qpair failed and we were unable to recover it.
00:24:00.989 [2024-07-25 13:52:57.780700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.989 [2024-07-25 13:52:57.780790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.989 [2024-07-25 13:52:57.780814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.989 [2024-07-25 13:52:57.780829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.989 [2024-07-25 13:52:57.780841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.989 [2024-07-25 13:52:57.780870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.989 qpair failed and we were unable to recover it.
00:24:00.989 [2024-07-25 13:52:57.790772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.989 [2024-07-25 13:52:57.790910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.989 [2024-07-25 13:52:57.790936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.989 [2024-07-25 13:52:57.790951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.989 [2024-07-25 13:52:57.790964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.989 [2024-07-25 13:52:57.791007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.989 qpair failed and we were unable to recover it.
00:24:00.989 [2024-07-25 13:52:57.800773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.989 [2024-07-25 13:52:57.800880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.989 [2024-07-25 13:52:57.800907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.989 [2024-07-25 13:52:57.800923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.989 [2024-07-25 13:52:57.800936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.989 [2024-07-25 13:52:57.800966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.989 qpair failed and we were unable to recover it.
00:24:00.989 [2024-07-25 13:52:57.810819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.989 [2024-07-25 13:52:57.810912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.989 [2024-07-25 13:52:57.810938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.989 [2024-07-25 13:52:57.810958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.989 [2024-07-25 13:52:57.810971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.989 [2024-07-25 13:52:57.811001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.989 qpair failed and we were unable to recover it.
00:24:00.989 [2024-07-25 13:52:57.820809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.989 [2024-07-25 13:52:57.820933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.989 [2024-07-25 13:52:57.820959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.989 [2024-07-25 13:52:57.820974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.989 [2024-07-25 13:52:57.820986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.989 [2024-07-25 13:52:57.821016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.989 qpair failed and we were unable to recover it.
00:24:00.989 [2024-07-25 13:52:57.830825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.989 [2024-07-25 13:52:57.830946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.989 [2024-07-25 13:52:57.830973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.989 [2024-07-25 13:52:57.830988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.989 [2024-07-25 13:52:57.831001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.989 [2024-07-25 13:52:57.831029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.989 qpair failed and we were unable to recover it.
00:24:00.989 [2024-07-25 13:52:57.840915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.989 [2024-07-25 13:52:57.841004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.989 [2024-07-25 13:52:57.841028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.989 [2024-07-25 13:52:57.841042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.989 [2024-07-25 13:52:57.841055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.989 [2024-07-25 13:52:57.841096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.989 qpair failed and we were unable to recover it.
00:24:00.989 [2024-07-25 13:52:57.850953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.989 [2024-07-25 13:52:57.851050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.989 [2024-07-25 13:52:57.851084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.989 [2024-07-25 13:52:57.851100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.989 [2024-07-25 13:52:57.851113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.989 [2024-07-25 13:52:57.851142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.989 qpair failed and we were unable to recover it.
00:24:00.989 [2024-07-25 13:52:57.860995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.989 [2024-07-25 13:52:57.861097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.989 [2024-07-25 13:52:57.861121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.989 [2024-07-25 13:52:57.861136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.989 [2024-07-25 13:52:57.861149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.989 [2024-07-25 13:52:57.861178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.989 qpair failed and we were unable to recover it.
00:24:00.989 [2024-07-25 13:52:57.871007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.989 [2024-07-25 13:52:57.871116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.989 [2024-07-25 13:52:57.871141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.989 [2024-07-25 13:52:57.871156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.990 [2024-07-25 13:52:57.871168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.990 [2024-07-25 13:52:57.871198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.990 qpair failed and we were unable to recover it.
00:24:00.990 [2024-07-25 13:52:57.881033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.990 [2024-07-25 13:52:57.881135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.990 [2024-07-25 13:52:57.881162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.990 [2024-07-25 13:52:57.881177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.990 [2024-07-25 13:52:57.881190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.990 [2024-07-25 13:52:57.881219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.990 qpair failed and we were unable to recover it.
00:24:00.990 [2024-07-25 13:52:57.891051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.990 [2024-07-25 13:52:57.891158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.990 [2024-07-25 13:52:57.891184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.990 [2024-07-25 13:52:57.891199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.990 [2024-07-25 13:52:57.891211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.990 [2024-07-25 13:52:57.891241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.990 qpair failed and we were unable to recover it.
00:24:00.990 [2024-07-25 13:52:57.901037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.990 [2024-07-25 13:52:57.901141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.990 [2024-07-25 13:52:57.901168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.990 [2024-07-25 13:52:57.901188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.990 [2024-07-25 13:52:57.901202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.990 [2024-07-25 13:52:57.901232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.990 qpair failed and we were unable to recover it.
00:24:00.990 [2024-07-25 13:52:57.911084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.990 [2024-07-25 13:52:57.911173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.990 [2024-07-25 13:52:57.911201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.990 [2024-07-25 13:52:57.911217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.990 [2024-07-25 13:52:57.911230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.990 [2024-07-25 13:52:57.911260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.990 qpair failed and we were unable to recover it.
00:24:00.990 [2024-07-25 13:52:57.921130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.990 [2024-07-25 13:52:57.921214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.990 [2024-07-25 13:52:57.921239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.990 [2024-07-25 13:52:57.921253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.990 [2024-07-25 13:52:57.921266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.990 [2024-07-25 13:52:57.921295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.990 qpair failed and we were unable to recover it.
00:24:00.990 [2024-07-25 13:52:57.931158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.990 [2024-07-25 13:52:57.931271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.990 [2024-07-25 13:52:57.931298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.990 [2024-07-25 13:52:57.931314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.990 [2024-07-25 13:52:57.931327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.990 [2024-07-25 13:52:57.931365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.990 qpair failed and we were unable to recover it.
00:24:00.990 [2024-07-25 13:52:57.941226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.990 [2024-07-25 13:52:57.941320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.990 [2024-07-25 13:52:57.941344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.990 [2024-07-25 13:52:57.941359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.990 [2024-07-25 13:52:57.941372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.990 [2024-07-25 13:52:57.941400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.990 qpair failed and we were unable to recover it.
00:24:00.990 [2024-07-25 13:52:57.951225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.990 [2024-07-25 13:52:57.951305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.990 [2024-07-25 13:52:57.951330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.990 [2024-07-25 13:52:57.951344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.990 [2024-07-25 13:52:57.951362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.990 [2024-07-25 13:52:57.951391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.990 qpair failed and we were unable to recover it.
00:24:00.990 [2024-07-25 13:52:57.961244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.990 [2024-07-25 13:52:57.961369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.990 [2024-07-25 13:52:57.961395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.990 [2024-07-25 13:52:57.961410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.990 [2024-07-25 13:52:57.961424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.990 [2024-07-25 13:52:57.961452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.990 qpair failed and we were unable to recover it.
00:24:00.990 [2024-07-25 13:52:57.971266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.990 [2024-07-25 13:52:57.971363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.990 [2024-07-25 13:52:57.971387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.990 [2024-07-25 13:52:57.971402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.990 [2024-07-25 13:52:57.971415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.990 [2024-07-25 13:52:57.971444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.990 qpair failed and we were unable to recover it.
00:24:00.990 [2024-07-25 13:52:57.981303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.990 [2024-07-25 13:52:57.981428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.990 [2024-07-25 13:52:57.981453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.990 [2024-07-25 13:52:57.981468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.990 [2024-07-25 13:52:57.981480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.990 [2024-07-25 13:52:57.981509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.990 qpair failed and we were unable to recover it.
00:24:00.990 [2024-07-25 13:52:57.991324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.990 [2024-07-25 13:52:57.991455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.990 [2024-07-25 13:52:57.991485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.990 [2024-07-25 13:52:57.991502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.991 [2024-07-25 13:52:57.991515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.991 [2024-07-25 13:52:57.991544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.991 qpair failed and we were unable to recover it.
00:24:00.991 [2024-07-25 13:52:58.001380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.991 [2024-07-25 13:52:58.001504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.991 [2024-07-25 13:52:58.001529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.991 [2024-07-25 13:52:58.001544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.991 [2024-07-25 13:52:58.001556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.991 [2024-07-25 13:52:58.001585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.991 qpair failed and we were unable to recover it.
00:24:00.991 [2024-07-25 13:52:58.011392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.991 [2024-07-25 13:52:58.011482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.991 [2024-07-25 13:52:58.011507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.991 [2024-07-25 13:52:58.011521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.991 [2024-07-25 13:52:58.011535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.991 [2024-07-25 13:52:58.011564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.991 qpair failed and we were unable to recover it.
00:24:00.991 [2024-07-25 13:52:58.021474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:00.991 [2024-07-25 13:52:58.021561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:00.991 [2024-07-25 13:52:58.021589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:00.991 [2024-07-25 13:52:58.021605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:00.991 [2024-07-25 13:52:58.021617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:00.991 [2024-07-25 13:52:58.021647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.991 qpair failed and we were unable to recover it.
00:24:01.252 [2024-07-25 13:52:58.031405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.252 [2024-07-25 13:52:58.031492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.252 [2024-07-25 13:52:58.031517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.252 [2024-07-25 13:52:58.031532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.252 [2024-07-25 13:52:58.031545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.252 [2024-07-25 13:52:58.031579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.252 qpair failed and we were unable to recover it.
00:24:01.252 [2024-07-25 13:52:58.041508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.252 [2024-07-25 13:52:58.041607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.252 [2024-07-25 13:52:58.041632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.252 [2024-07-25 13:52:58.041647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.252 [2024-07-25 13:52:58.041659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.252 [2024-07-25 13:52:58.041689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.252 qpair failed and we were unable to recover it.
00:24:01.252 [2024-07-25 13:52:58.051483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.252 [2024-07-25 13:52:58.051571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.252 [2024-07-25 13:52:58.051596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.252 [2024-07-25 13:52:58.051612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.252 [2024-07-25 13:52:58.051625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.252 [2024-07-25 13:52:58.051654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.252 qpair failed and we were unable to recover it.
00:24:01.252 [2024-07-25 13:52:58.061657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.253 [2024-07-25 13:52:58.061795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.253 [2024-07-25 13:52:58.061819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.253 [2024-07-25 13:52:58.061834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.253 [2024-07-25 13:52:58.061847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.253 [2024-07-25 13:52:58.061876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.253 qpair failed and we were unable to recover it.
00:24:01.253 [2024-07-25 13:52:58.071649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.253 [2024-07-25 13:52:58.071736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.253 [2024-07-25 13:52:58.071761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.253 [2024-07-25 13:52:58.071776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.253 [2024-07-25 13:52:58.071789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.253 [2024-07-25 13:52:58.071818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.253 qpair failed and we were unable to recover it.
00:24:01.253 [2024-07-25 13:52:58.081578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.253 [2024-07-25 13:52:58.081676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.253 [2024-07-25 13:52:58.081705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.253 [2024-07-25 13:52:58.081721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.253 [2024-07-25 13:52:58.081734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.253 [2024-07-25 13:52:58.081763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.253 qpair failed and we were unable to recover it.
00:24:01.253 [2024-07-25 13:52:58.091679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.253 [2024-07-25 13:52:58.091792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.253 [2024-07-25 13:52:58.091817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.253 [2024-07-25 13:52:58.091832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.253 [2024-07-25 13:52:58.091845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.253 [2024-07-25 13:52:58.091885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.253 qpair failed and we were unable to recover it.
00:24:01.253 [2024-07-25 13:52:58.101638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.253 [2024-07-25 13:52:58.101762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.253 [2024-07-25 13:52:58.101787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.253 [2024-07-25 13:52:58.101803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.253 [2024-07-25 13:52:58.101815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.253 [2024-07-25 13:52:58.101845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.253 qpair failed and we were unable to recover it.
00:24:01.253 [2024-07-25 13:52:58.111696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.253 [2024-07-25 13:52:58.111785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.253 [2024-07-25 13:52:58.111809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.253 [2024-07-25 13:52:58.111824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.253 [2024-07-25 13:52:58.111837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.253 [2024-07-25 13:52:58.111866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.253 qpair failed and we were unable to recover it.
00:24:01.253 [2024-07-25 13:52:58.121715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.253 [2024-07-25 13:52:58.121807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.253 [2024-07-25 13:52:58.121832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.253 [2024-07-25 13:52:58.121847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.253 [2024-07-25 13:52:58.121865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.253 [2024-07-25 13:52:58.121896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.253 qpair failed and we were unable to recover it.
00:24:01.253 [2024-07-25 13:52:58.131756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.253 [2024-07-25 13:52:58.131862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.253 [2024-07-25 13:52:58.131888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.253 [2024-07-25 13:52:58.131903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.253 [2024-07-25 13:52:58.131916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.253 [2024-07-25 13:52:58.131960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.253 qpair failed and we were unable to recover it.
00:24:01.253 [2024-07-25 13:52:58.141752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.253 [2024-07-25 13:52:58.141839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.253 [2024-07-25 13:52:58.141864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.253 [2024-07-25 13:52:58.141880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.253 [2024-07-25 13:52:58.141893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.253 [2024-07-25 13:52:58.141923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.253 qpair failed and we were unable to recover it.
00:24:01.253 [2024-07-25 13:52:58.151799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.253 [2024-07-25 13:52:58.151889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.253 [2024-07-25 13:52:58.151918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.253 [2024-07-25 13:52:58.151935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.253 [2024-07-25 13:52:58.151948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.253 [2024-07-25 13:52:58.151991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.253 qpair failed and we were unable to recover it.
00:24:01.253 [2024-07-25 13:52:58.161791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.253 [2024-07-25 13:52:58.161891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.253 [2024-07-25 13:52:58.161917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.253 [2024-07-25 13:52:58.161931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.253 [2024-07-25 13:52:58.161944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.253 [2024-07-25 13:52:58.161974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.253 qpair failed and we were unable to recover it.
00:24:01.253 [2024-07-25 13:52:58.171887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.253 [2024-07-25 13:52:58.172035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.253 [2024-07-25 13:52:58.172069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.253 [2024-07-25 13:52:58.172088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.253 [2024-07-25 13:52:58.172101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.253 [2024-07-25 13:52:58.172130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.253 qpair failed and we were unable to recover it.
00:24:01.253 [2024-07-25 13:52:58.181895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.254 [2024-07-25 13:52:58.181983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.254 [2024-07-25 13:52:58.182008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.254 [2024-07-25 13:52:58.182022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.254 [2024-07-25 13:52:58.182035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.254 [2024-07-25 13:52:58.182070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.254 qpair failed and we were unable to recover it.
00:24:01.254 [2024-07-25 13:52:58.191877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.254 [2024-07-25 13:52:58.191963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.254 [2024-07-25 13:52:58.191988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.254 [2024-07-25 13:52:58.192003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.254 [2024-07-25 13:52:58.192016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.254 [2024-07-25 13:52:58.192045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.254 qpair failed and we were unable to recover it.
00:24:01.254 [2024-07-25 13:52:58.201916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.254 [2024-07-25 13:52:58.202002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.254 [2024-07-25 13:52:58.202027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.254 [2024-07-25 13:52:58.202042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.254 [2024-07-25 13:52:58.202055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.254 [2024-07-25 13:52:58.202093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.254 qpair failed and we were unable to recover it.
00:24:01.254 [2024-07-25 13:52:58.211997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.254 [2024-07-25 13:52:58.212091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.254 [2024-07-25 13:52:58.212120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.254 [2024-07-25 13:52:58.212135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.254 [2024-07-25 13:52:58.212154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.254 [2024-07-25 13:52:58.212185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.254 qpair failed and we were unable to recover it.
00:24:01.254 [2024-07-25 13:52:58.221977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.254 [2024-07-25 13:52:58.222076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.254 [2024-07-25 13:52:58.222102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.254 [2024-07-25 13:52:58.222118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.254 [2024-07-25 13:52:58.222130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.254 [2024-07-25 13:52:58.222160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.254 qpair failed and we were unable to recover it.
00:24:01.254 [2024-07-25 13:52:58.232025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.254 [2024-07-25 13:52:58.232136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.254 [2024-07-25 13:52:58.232162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.254 [2024-07-25 13:52:58.232177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.254 [2024-07-25 13:52:58.232190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.254 [2024-07-25 13:52:58.232219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.254 qpair failed and we were unable to recover it.
00:24:01.254 [2024-07-25 13:52:58.242022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.254 [2024-07-25 13:52:58.242118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.254 [2024-07-25 13:52:58.242143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.254 [2024-07-25 13:52:58.242158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.254 [2024-07-25 13:52:58.242170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.254 [2024-07-25 13:52:58.242201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.254 qpair failed and we were unable to recover it.
00:24:01.254 [2024-07-25 13:52:58.252047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.254 [2024-07-25 13:52:58.252157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.254 [2024-07-25 13:52:58.252183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.254 [2024-07-25 13:52:58.252197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.254 [2024-07-25 13:52:58.252210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.254 [2024-07-25 13:52:58.252240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.254 qpair failed and we were unable to recover it.
00:24:01.254 [2024-07-25 13:52:58.262102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.254 [2024-07-25 13:52:58.262194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.254 [2024-07-25 13:52:58.262219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.254 [2024-07-25 13:52:58.262232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.254 [2024-07-25 13:52:58.262245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.254 [2024-07-25 13:52:58.262275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.254 qpair failed and we were unable to recover it.
00:24:01.254 [2024-07-25 13:52:58.272100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.254 [2024-07-25 13:52:58.272190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.254 [2024-07-25 13:52:58.272215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.254 [2024-07-25 13:52:58.272230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.254 [2024-07-25 13:52:58.272243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.254 [2024-07-25 13:52:58.272271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.254 qpair failed and we were unable to recover it.
00:24:01.254 [2024-07-25 13:52:58.282146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.254 [2024-07-25 13:52:58.282234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.254 [2024-07-25 13:52:58.282260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.254 [2024-07-25 13:52:58.282274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.254 [2024-07-25 13:52:58.282287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.254 [2024-07-25 13:52:58.282317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.254 qpair failed and we were unable to recover it.
00:24:01.515 [2024-07-25 13:52:58.292200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.515 [2024-07-25 13:52:58.292303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.515 [2024-07-25 13:52:58.292339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.515 [2024-07-25 13:52:58.292355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.515 [2024-07-25 13:52:58.292368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.515 [2024-07-25 13:52:58.292397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.515 qpair failed and we were unable to recover it.
00:24:01.515 [2024-07-25 13:52:58.302291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.515 [2024-07-25 13:52:58.302382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.515 [2024-07-25 13:52:58.302407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.515 [2024-07-25 13:52:58.302432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.515 [2024-07-25 13:52:58.302446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.515 [2024-07-25 13:52:58.302476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.515 qpair failed and we were unable to recover it.
00:24:01.515 [2024-07-25 13:52:58.312263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.515 [2024-07-25 13:52:58.312354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.515 [2024-07-25 13:52:58.312379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.515 [2024-07-25 13:52:58.312395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.515 [2024-07-25 13:52:58.312408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.515 [2024-07-25 13:52:58.312437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.515 qpair failed and we were unable to recover it.
00:24:01.515 [2024-07-25 13:52:58.322283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.515 [2024-07-25 13:52:58.322369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.515 [2024-07-25 13:52:58.322394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.515 [2024-07-25 13:52:58.322409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.515 [2024-07-25 13:52:58.322421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.515 [2024-07-25 13:52:58.322451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.515 qpair failed and we were unable to recover it.
00:24:01.515 [2024-07-25 13:52:58.332377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.515 [2024-07-25 13:52:58.332473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.515 [2024-07-25 13:52:58.332497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.515 [2024-07-25 13:52:58.332512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.515 [2024-07-25 13:52:58.332525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.515 [2024-07-25 13:52:58.332554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.515 qpair failed and we were unable to recover it.
00:24:01.515 [2024-07-25 13:52:58.342302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.515 [2024-07-25 13:52:58.342389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.515 [2024-07-25 13:52:58.342414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.515 [2024-07-25 13:52:58.342429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.515 [2024-07-25 13:52:58.342442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.515 [2024-07-25 13:52:58.342484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.515 qpair failed and we were unable to recover it.
00:24:01.515 [2024-07-25 13:52:58.352316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.515 [2024-07-25 13:52:58.352437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.515 [2024-07-25 13:52:58.352463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.515 [2024-07-25 13:52:58.352478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.515 [2024-07-25 13:52:58.352490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.515 [2024-07-25 13:52:58.352520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.515 qpair failed and we were unable to recover it.
00:24:01.515 [2024-07-25 13:52:58.362337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.515 [2024-07-25 13:52:58.362419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.515 [2024-07-25 13:52:58.362443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.515 [2024-07-25 13:52:58.362457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.515 [2024-07-25 13:52:58.362470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.515 [2024-07-25 13:52:58.362499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.515 qpair failed and we were unable to recover it.
00:24:01.515 [2024-07-25 13:52:58.372428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.515 [2024-07-25 13:52:58.372528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.515 [2024-07-25 13:52:58.372553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.515 [2024-07-25 13:52:58.372569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.515 [2024-07-25 13:52:58.372582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.515 [2024-07-25 13:52:58.372623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.515 qpair failed and we were unable to recover it.
00:24:01.515 [2024-07-25 13:52:58.382444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.515 [2024-07-25 13:52:58.382537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.515 [2024-07-25 13:52:58.382561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.515 [2024-07-25 13:52:58.382576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.516 [2024-07-25 13:52:58.382588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.516 [2024-07-25 13:52:58.382618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.516 qpair failed and we were unable to recover it.
00:24:01.516 [2024-07-25 13:52:58.392441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.516 [2024-07-25 13:52:58.392564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.516 [2024-07-25 13:52:58.392593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.516 [2024-07-25 13:52:58.392610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.516 [2024-07-25 13:52:58.392623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.516 [2024-07-25 13:52:58.392652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.516 qpair failed and we were unable to recover it.
00:24:01.516 [2024-07-25 13:52:58.402598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.516 [2024-07-25 13:52:58.402738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.516 [2024-07-25 13:52:58.402779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.516 [2024-07-25 13:52:58.402795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.516 [2024-07-25 13:52:58.402807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.516 [2024-07-25 13:52:58.402851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.516 qpair failed and we were unable to recover it.
00:24:01.516 [2024-07-25 13:52:58.412508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.516 [2024-07-25 13:52:58.412623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.516 [2024-07-25 13:52:58.412648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.516 [2024-07-25 13:52:58.412663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.516 [2024-07-25 13:52:58.412676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.516 [2024-07-25 13:52:58.412705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.516 qpair failed and we were unable to recover it.
00:24:01.516 [2024-07-25 13:52:58.422652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.516 [2024-07-25 13:52:58.422785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.516 [2024-07-25 13:52:58.422812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.516 [2024-07-25 13:52:58.422827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.516 [2024-07-25 13:52:58.422841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.516 [2024-07-25 13:52:58.422875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.516 qpair failed and we were unable to recover it.
00:24:01.516 [2024-07-25 13:52:58.432553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.516 [2024-07-25 13:52:58.432688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.516 [2024-07-25 13:52:58.432715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.516 [2024-07-25 13:52:58.432730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.516 [2024-07-25 13:52:58.432743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.516 [2024-07-25 13:52:58.432789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.516 qpair failed and we were unable to recover it.
00:24:01.516 [2024-07-25 13:52:58.442613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.516 [2024-07-25 13:52:58.442725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.516 [2024-07-25 13:52:58.442752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.516 [2024-07-25 13:52:58.442768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.516 [2024-07-25 13:52:58.442781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.516 [2024-07-25 13:52:58.442812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.516 qpair failed and we were unable to recover it.
00:24:01.516 [2024-07-25 13:52:58.452650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.516 [2024-07-25 13:52:58.452754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.516 [2024-07-25 13:52:58.452778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.516 [2024-07-25 13:52:58.452793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.516 [2024-07-25 13:52:58.452806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.516 [2024-07-25 13:52:58.452835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.516 qpair failed and we were unable to recover it.
00:24:01.516 [2024-07-25 13:52:58.462637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.516 [2024-07-25 13:52:58.462722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.516 [2024-07-25 13:52:58.462746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.516 [2024-07-25 13:52:58.462761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.516 [2024-07-25 13:52:58.462775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.516 [2024-07-25 13:52:58.462804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.516 qpair failed and we were unable to recover it.
00:24:01.516 [2024-07-25 13:52:58.472667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.516 [2024-07-25 13:52:58.472784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.516 [2024-07-25 13:52:58.472810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.516 [2024-07-25 13:52:58.472825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.516 [2024-07-25 13:52:58.472838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.516 [2024-07-25 13:52:58.472868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.516 qpair failed and we were unable to recover it.
00:24:01.516 [2024-07-25 13:52:58.482725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.516 [2024-07-25 13:52:58.482808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.516 [2024-07-25 13:52:58.482838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.516 [2024-07-25 13:52:58.482853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.516 [2024-07-25 13:52:58.482866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.516 [2024-07-25 13:52:58.482907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.516 qpair failed and we were unable to recover it.
00:24:01.516 [2024-07-25 13:52:58.492728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.516 [2024-07-25 13:52:58.492821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.516 [2024-07-25 13:52:58.492853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.516 [2024-07-25 13:52:58.492868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.516 [2024-07-25 13:52:58.492881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.516 [2024-07-25 13:52:58.492910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.516 qpair failed and we were unable to recover it.
00:24:01.516 [2024-07-25 13:52:58.502757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.516 [2024-07-25 13:52:58.502856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.516 [2024-07-25 13:52:58.502881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.516 [2024-07-25 13:52:58.502896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.516 [2024-07-25 13:52:58.502909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.516 [2024-07-25 13:52:58.502938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.516 qpair failed and we were unable to recover it.
00:24:01.516 [2024-07-25 13:52:58.512787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.516 [2024-07-25 13:52:58.512873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.516 [2024-07-25 13:52:58.512898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.516 [2024-07-25 13:52:58.512913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.516 [2024-07-25 13:52:58.512926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.516 [2024-07-25 13:52:58.512955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.516 qpair failed and we were unable to recover it.
00:24:01.516 [2024-07-25 13:52:58.522820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.516 [2024-07-25 13:52:58.522904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.517 [2024-07-25 13:52:58.522932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.517 [2024-07-25 13:52:58.522948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.517 [2024-07-25 13:52:58.522967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.517 [2024-07-25 13:52:58.522998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.517 qpair failed and we were unable to recover it.
00:24:01.517 [2024-07-25 13:52:58.532851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.517 [2024-07-25 13:52:58.532943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.517 [2024-07-25 13:52:58.532969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.517 [2024-07-25 13:52:58.532984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.517 [2024-07-25 13:52:58.532998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.517 [2024-07-25 13:52:58.533026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.517 qpair failed and we were unable to recover it.
00:24:01.517 [2024-07-25 13:52:58.542876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.517 [2024-07-25 13:52:58.543015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.517 [2024-07-25 13:52:58.543042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.517 [2024-07-25 13:52:58.543057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.517 [2024-07-25 13:52:58.543078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.517 [2024-07-25 13:52:58.543108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.517 qpair failed and we were unable to recover it.
00:24:01.778 [2024-07-25 13:52:58.552999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.778 [2024-07-25 13:52:58.553095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.778 [2024-07-25 13:52:58.553121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.778 [2024-07-25 13:52:58.553136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.778 [2024-07-25 13:52:58.553150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.778 [2024-07-25 13:52:58.553179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.778 qpair failed and we were unable to recover it.
00:24:01.778 [2024-07-25 13:52:58.562906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.778 [2024-07-25 13:52:58.562988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.778 [2024-07-25 13:52:58.563012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.778 [2024-07-25 13:52:58.563026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.778 [2024-07-25 13:52:58.563039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.778 [2024-07-25 13:52:58.563076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.778 qpair failed and we were unable to recover it.
00:24:01.778 [2024-07-25 13:52:58.572985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.778 [2024-07-25 13:52:58.573088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.778 [2024-07-25 13:52:58.573125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.778 [2024-07-25 13:52:58.573140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.778 [2024-07-25 13:52:58.573153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.778 [2024-07-25 13:52:58.573182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.778 qpair failed and we were unable to recover it.
00:24:01.778 [2024-07-25 13:52:58.582975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.778 [2024-07-25 13:52:58.583069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.778 [2024-07-25 13:52:58.583105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.778 [2024-07-25 13:52:58.583119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.778 [2024-07-25 13:52:58.583132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.778 [2024-07-25 13:52:58.583161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.778 qpair failed and we were unable to recover it.
00:24:01.778 [2024-07-25 13:52:58.593116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.778 [2024-07-25 13:52:58.593204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.778 [2024-07-25 13:52:58.593228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.778 [2024-07-25 13:52:58.593243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.778 [2024-07-25 13:52:58.593256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.778 [2024-07-25 13:52:58.593285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.778 qpair failed and we were unable to recover it.
00:24:01.778 [2024-07-25 13:52:58.603030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.778 [2024-07-25 13:52:58.603135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.778 [2024-07-25 13:52:58.603160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.778 [2024-07-25 13:52:58.603175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.778 [2024-07-25 13:52:58.603188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.778 [2024-07-25 13:52:58.603227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.778 qpair failed and we were unable to recover it.
00:24:01.778 [2024-07-25 13:52:58.613106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.778 [2024-07-25 13:52:58.613196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.778 [2024-07-25 13:52:58.613223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.778 [2024-07-25 13:52:58.613238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.778 [2024-07-25 13:52:58.613256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.778 [2024-07-25 13:52:58.613299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.778 qpair failed and we were unable to recover it.
00:24:01.778 [2024-07-25 13:52:58.623197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.778 [2024-07-25 13:52:58.623287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.778 [2024-07-25 13:52:58.623311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.778 [2024-07-25 13:52:58.623326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.778 [2024-07-25 13:52:58.623339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.778 [2024-07-25 13:52:58.623368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.778 qpair failed and we were unable to recover it.
00:24:01.778 [2024-07-25 13:52:58.633129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.779 [2024-07-25 13:52:58.633215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.779 [2024-07-25 13:52:58.633241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.779 [2024-07-25 13:52:58.633255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.779 [2024-07-25 13:52:58.633268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.779 [2024-07-25 13:52:58.633298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.779 qpair failed and we were unable to recover it.
00:24:01.779 [2024-07-25 13:52:58.643179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.779 [2024-07-25 13:52:58.643273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.779 [2024-07-25 13:52:58.643297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.779 [2024-07-25 13:52:58.643312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.779 [2024-07-25 13:52:58.643325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.779 [2024-07-25 13:52:58.643354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.779 qpair failed and we were unable to recover it.
00:24:01.779 [2024-07-25 13:52:58.653307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.779 [2024-07-25 13:52:58.653401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.779 [2024-07-25 13:52:58.653425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.779 [2024-07-25 13:52:58.653441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.779 [2024-07-25 13:52:58.653454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.779 [2024-07-25 13:52:58.653483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.779 qpair failed and we were unable to recover it.
00:24:01.779 [2024-07-25 13:52:58.663225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.779 [2024-07-25 13:52:58.663313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.779 [2024-07-25 13:52:58.663339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.779 [2024-07-25 13:52:58.663354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.779 [2024-07-25 13:52:58.663366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.779 [2024-07-25 13:52:58.663396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.779 qpair failed and we were unable to recover it.
00:24:01.779 [2024-07-25 13:52:58.673284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.779 [2024-07-25 13:52:58.673401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.779 [2024-07-25 13:52:58.673426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.779 [2024-07-25 13:52:58.673441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.779 [2024-07-25 13:52:58.673454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.779 [2024-07-25 13:52:58.673483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.779 qpair failed and we were unable to recover it.
00:24:01.779 [2024-07-25 13:52:58.683292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.779 [2024-07-25 13:52:58.683372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.779 [2024-07-25 13:52:58.683397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.779 [2024-07-25 13:52:58.683411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.779 [2024-07-25 13:52:58.683424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.779 [2024-07-25 13:52:58.683453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.779 qpair failed and we were unable to recover it.
00:24:01.779 [2024-07-25 13:52:58.693363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.779 [2024-07-25 13:52:58.693456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.779 [2024-07-25 13:52:58.693481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.779 [2024-07-25 13:52:58.693496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.779 [2024-07-25 13:52:58.693509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.779 [2024-07-25 13:52:58.693550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.779 qpair failed and we were unable to recover it.
00:24:01.779 [2024-07-25 13:52:58.703335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.779 [2024-07-25 13:52:58.703424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.779 [2024-07-25 13:52:58.703450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.779 [2024-07-25 13:52:58.703470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.779 [2024-07-25 13:52:58.703484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.779 [2024-07-25 13:52:58.703514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.779 qpair failed and we were unable to recover it.
00:24:01.779 [2024-07-25 13:52:58.713372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.779 [2024-07-25 13:52:58.713455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.779 [2024-07-25 13:52:58.713481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.779 [2024-07-25 13:52:58.713496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.779 [2024-07-25 13:52:58.713509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.779 [2024-07-25 13:52:58.713539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.779 qpair failed and we were unable to recover it.
00:24:01.779 [2024-07-25 13:52:58.723393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.779 [2024-07-25 13:52:58.723480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.779 [2024-07-25 13:52:58.723508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.779 [2024-07-25 13:52:58.723524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.779 [2024-07-25 13:52:58.723537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.779 [2024-07-25 13:52:58.723567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.779 qpair failed and we were unable to recover it.
00:24:01.779 [2024-07-25 13:52:58.733421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.779 [2024-07-25 13:52:58.733553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.779 [2024-07-25 13:52:58.733577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.779 [2024-07-25 13:52:58.733593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.779 [2024-07-25 13:52:58.733605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.779 [2024-07-25 13:52:58.733634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.779 qpair failed and we were unable to recover it.
00:24:01.779 [2024-07-25 13:52:58.743588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.779 [2024-07-25 13:52:58.743725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.779 [2024-07-25 13:52:58.743750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.779 [2024-07-25 13:52:58.743766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.779 [2024-07-25 13:52:58.743779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.779 [2024-07-25 13:52:58.743808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.779 qpair failed and we were unable to recover it.
00:24:01.779 [2024-07-25 13:52:58.753519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.780 [2024-07-25 13:52:58.753606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.780 [2024-07-25 13:52:58.753632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.780 [2024-07-25 13:52:58.753646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.780 [2024-07-25 13:52:58.753659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.780 [2024-07-25 13:52:58.753703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.780 qpair failed and we were unable to recover it.
00:24:01.780 [2024-07-25 13:52:58.763537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.780 [2024-07-25 13:52:58.763631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.780 [2024-07-25 13:52:58.763655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.780 [2024-07-25 13:52:58.763670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.780 [2024-07-25 13:52:58.763683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.780 [2024-07-25 13:52:58.763713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.780 qpair failed and we were unable to recover it.
00:24:01.780 [2024-07-25 13:52:58.773533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.780 [2024-07-25 13:52:58.773620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.780 [2024-07-25 13:52:58.773645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.780 [2024-07-25 13:52:58.773660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.780 [2024-07-25 13:52:58.773673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.780 [2024-07-25 13:52:58.773702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.780 qpair failed and we were unable to recover it.
00:24:01.780 [2024-07-25 13:52:58.783645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.780 [2024-07-25 13:52:58.783736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.780 [2024-07-25 13:52:58.783761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.780 [2024-07-25 13:52:58.783775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.780 [2024-07-25 13:52:58.783788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.780 [2024-07-25 13:52:58.783817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.780 qpair failed and we were unable to recover it.
00:24:01.780 [2024-07-25 13:52:58.793572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.780 [2024-07-25 13:52:58.793654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.780 [2024-07-25 13:52:58.793685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.780 [2024-07-25 13:52:58.793701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.780 [2024-07-25 13:52:58.793714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.780 [2024-07-25 13:52:58.793743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.780 qpair failed and we were unable to recover it.
00:24:01.780 [2024-07-25 13:52:58.803641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:01.780 [2024-07-25 13:52:58.803730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:01.780 [2024-07-25 13:52:58.803755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:01.780 [2024-07-25 13:52:58.803771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:01.780 [2024-07-25 13:52:58.803783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:01.780 [2024-07-25 13:52:58.803814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:01.780 qpair failed and we were unable to recover it.
00:24:02.041 [2024-07-25 13:52:58.813657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:02.041 [2024-07-25 13:52:58.813747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:02.041 [2024-07-25 13:52:58.813772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:02.041 [2024-07-25 13:52:58.813787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:02.041 [2024-07-25 13:52:58.813799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:02.041 [2024-07-25 13:52:58.813829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:02.041 qpair failed and we were unable to recover it.
00:24:02.041 [2024-07-25 13:52:58.823716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:02.041 [2024-07-25 13:52:58.823803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:02.041 [2024-07-25 13:52:58.823829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:02.041 [2024-07-25 13:52:58.823844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:02.041 [2024-07-25 13:52:58.823856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:02.041 [2024-07-25 13:52:58.823897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:02.041 qpair failed and we were unable to recover it.
00:24:02.041 [2024-07-25 13:52:58.833734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:02.041 [2024-07-25 13:52:58.833829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:02.041 [2024-07-25 13:52:58.833854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:02.041 [2024-07-25 13:52:58.833869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:02.041 [2024-07-25 13:52:58.833882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:02.041 [2024-07-25 13:52:58.833917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:02.041 qpair failed and we were unable to recover it.
00:24:02.041 [2024-07-25 13:52:58.843734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:02.041 [2024-07-25 13:52:58.843835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:02.041 [2024-07-25 13:52:58.843861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:02.041 [2024-07-25 13:52:58.843875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:02.041 [2024-07-25 13:52:58.843888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:02.041 [2024-07-25 13:52:58.843917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:02.041 qpair failed and we were unable to recover it.
00:24:02.041 [2024-07-25 13:52:58.853807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:02.041 [2024-07-25 13:52:58.853902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:02.041 [2024-07-25 13:52:58.853926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:02.041 [2024-07-25 13:52:58.853942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:02.041 [2024-07-25 13:52:58.853955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:02.041 [2024-07-25 13:52:58.853995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:02.041 qpair failed and we were unable to recover it.
00:24:02.041 [2024-07-25 13:52:58.863805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:02.041 [2024-07-25 13:52:58.863897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:02.041 [2024-07-25 13:52:58.863921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:02.041 [2024-07-25 13:52:58.863936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:02.041 [2024-07-25 13:52:58.863949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90
00:24:02.041 [2024-07-25 13:52:58.863979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:02.041 qpair failed and we were unable to recover it.
00:24:02.041 [2024-07-25 13:52:58.873840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.041 [2024-07-25 13:52:58.873935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.041 [2024-07-25 13:52:58.873960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.041 [2024-07-25 13:52:58.873974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.041 [2024-07-25 13:52:58.873987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:02.041 [2024-07-25 13:52:58.874017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.041 qpair failed and we were unable to recover it. 00:24:02.041 [2024-07-25 13:52:58.883848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.041 [2024-07-25 13:52:58.883931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.041 [2024-07-25 13:52:58.883962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.041 [2024-07-25 13:52:58.883978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.041 [2024-07-25 13:52:58.883991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:02.041 [2024-07-25 13:52:58.884020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.041 qpair failed and we were unable to recover it. 00:24:02.041 [2024-07-25 13:52:58.893907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.041 [2024-07-25 13:52:58.894002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.041 [2024-07-25 13:52:58.894027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.041 [2024-07-25 13:52:58.894042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.041 [2024-07-25 13:52:58.894054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:02.041 [2024-07-25 13:52:58.894092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.041 qpair failed and we were unable to recover it. 
00:24:02.041 [2024-07-25 13:52:58.903931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.041 [2024-07-25 13:52:58.904021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.041 [2024-07-25 13:52:58.904046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.041 [2024-07-25 13:52:58.904068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.041 [2024-07-25 13:52:58.904083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:02.041 [2024-07-25 13:52:58.904113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.041 qpair failed and we were unable to recover it. 00:24:02.041 [2024-07-25 13:52:58.913939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.041 [2024-07-25 13:52:58.914023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.041 [2024-07-25 13:52:58.914048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.041 [2024-07-25 13:52:58.914076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.041 [2024-07-25 13:52:58.914091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:02.041 [2024-07-25 13:52:58.914121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.041 qpair failed and we were unable to recover it. 00:24:02.041 [2024-07-25 13:52:58.924029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.041 [2024-07-25 13:52:58.924126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.041 [2024-07-25 13:52:58.924151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.041 [2024-07-25 13:52:58.924166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.041 [2024-07-25 13:52:58.924179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:02.041 [2024-07-25 13:52:58.924214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.041 qpair failed and we were unable to recover it. 
00:24:02.041 [2024-07-25 13:52:58.934014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.041 [2024-07-25 13:52:58.934142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.041 [2024-07-25 13:52:58.934167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.041 [2024-07-25 13:52:58.934183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.041 [2024-07-25 13:52:58.934196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:02.041 [2024-07-25 13:52:58.934226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.041 qpair failed and we were unable to recover it. 00:24:02.041 [2024-07-25 13:52:58.944027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.041 [2024-07-25 13:52:58.944134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.041 [2024-07-25 13:52:58.944159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.041 [2024-07-25 13:52:58.944174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.041 [2024-07-25 13:52:58.944187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:02.041 [2024-07-25 13:52:58.944217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.041 qpair failed and we were unable to recover it. 00:24:02.041 [2024-07-25 13:52:58.954064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.041 [2024-07-25 13:52:58.954160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.041 [2024-07-25 13:52:58.954184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.041 [2024-07-25 13:52:58.954199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.041 [2024-07-25 13:52:58.954212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:02.041 [2024-07-25 13:52:58.954242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.041 qpair failed and we were unable to recover it. 
00:24:02.041 [2024-07-25 13:52:58.964098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.042 [2024-07-25 13:52:58.964182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.042 [2024-07-25 13:52:58.964207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.042 [2024-07-25 13:52:58.964221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.042 [2024-07-25 13:52:58.964234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:02.042 [2024-07-25 13:52:58.964264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.042 qpair failed and we were unable to recover it. 00:24:02.042 [2024-07-25 13:52:58.974166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.042 [2024-07-25 13:52:58.974266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.042 [2024-07-25 13:52:58.974291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.042 [2024-07-25 13:52:58.974306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.042 [2024-07-25 13:52:58.974319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:02.042 [2024-07-25 13:52:58.974348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.042 qpair failed and we were unable to recover it. 00:24:02.042 [2024-07-25 13:52:58.984237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.042 [2024-07-25 13:52:58.984342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.042 [2024-07-25 13:52:58.984368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.042 [2024-07-25 13:52:58.984383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.042 [2024-07-25 13:52:58.984396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:02.042 [2024-07-25 13:52:58.984426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.042 qpair failed and we were unable to recover it. 
00:24:02.042 [2024-07-25 13:52:58.994198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.042 [2024-07-25 13:52:58.994286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.042 [2024-07-25 13:52:58.994311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.042 [2024-07-25 13:52:58.994326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.042 [2024-07-25 13:52:58.994340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:02.042 [2024-07-25 13:52:58.994380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.042 qpair failed and we were unable to recover it. 00:24:02.042 [2024-07-25 13:52:59.004199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.042 [2024-07-25 13:52:59.004284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.042 [2024-07-25 13:52:59.004308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.042 [2024-07-25 13:52:59.004323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.042 [2024-07-25 13:52:59.004335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:02.042 [2024-07-25 13:52:59.004365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.042 qpair failed and we were unable to recover it. 00:24:02.042 [2024-07-25 13:52:59.014262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.042 [2024-07-25 13:52:59.014349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.042 [2024-07-25 13:52:59.014373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.042 [2024-07-25 13:52:59.014387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.042 [2024-07-25 13:52:59.014405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:02.042 [2024-07-25 13:52:59.014436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.042 qpair failed and we were unable to recover it. 
00:24:02.042 [2024-07-25 13:52:59.024297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.042 [2024-07-25 13:52:59.024385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.042 [2024-07-25 13:52:59.024410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.042 [2024-07-25 13:52:59.024425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.042 [2024-07-25 13:52:59.024438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:02.042 [2024-07-25 13:52:59.024466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.042 qpair failed and we were unable to recover it. 00:24:02.042 [2024-07-25 13:52:59.034323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.042 [2024-07-25 13:52:59.034446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.042 [2024-07-25 13:52:59.034472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.042 [2024-07-25 13:52:59.034487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.042 [2024-07-25 13:52:59.034499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:02.042 [2024-07-25 13:52:59.034529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.042 qpair failed and we were unable to recover it. 00:24:02.042 [2024-07-25 13:52:59.044359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.042 [2024-07-25 13:52:59.044448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.042 [2024-07-25 13:52:59.044476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.042 [2024-07-25 13:52:59.044492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.042 [2024-07-25 13:52:59.044505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3c90000b90 00:24:02.042 [2024-07-25 13:52:59.044534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:02.042 qpair failed and we were unable to recover it. 00:24:02.042 [2024-07-25 13:52:59.044664] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:24:02.042 A controller has encountered a failure and is being reset. 
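
[Editor's note] "sct 1, sc 130" decodes to status code type 1 (command specific) with status 0x82, the Fabrics CONNECT "invalid parameters" completion: the target has dropped controller ID 0x1, so every I/O-qpair CONNECT retry fails with rc -5 (-EIO) until the host's Keep Alive submission also fails and triggers the reset announced above. A comparable failure window can be provoked by hand against a running nvmf_tgt; the listener address and subsystem NQN below are taken from this log, but the sketch is illustrative and is not how the test harness actually drives the disconnect.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Drop the TCP listener so in-flight I/O-qpair CONNECTs start failing ...
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1      # the host above retries CONNECT roughly every 10 ms
    # ... then restore it and let the host's reset path re-attach.
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
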
00:24:02.042 [2024-07-25 13:52:59.054377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.042 [2024-07-25 13:52:59.054469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.042 [2024-07-25 13:52:59.054500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.042 [2024-07-25 13:52:59.054516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.042 [2024-07-25 13:52:59.054529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x118b250 00:24:02.042 [2024-07-25 13:52:59.054559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.042 qpair failed and we were unable to recover it. 00:24:02.042 [2024-07-25 13:52:59.064408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:02.042 [2024-07-25 13:52:59.064492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:02.042 [2024-07-25 13:52:59.064518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:02.042 [2024-07-25 13:52:59.064533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:02.042 [2024-07-25 13:52:59.064546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x118b250 00:24:02.042 [2024-07-25 13:52:59.064574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:02.042 qpair failed and we were unable to recover it. 00:24:02.300 Controller properly reset. 00:24:02.300 Initializing NVMe Controllers 00:24:02.300 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:02.300 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:02.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:02.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:02.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:02.300 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:02.300 Initialization complete. Launching workers. 
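
[Editor's note] With the old association gone, the CONNECT retries for qpair id 3 above fail the same way until the reset completes; "Controller properly reset" confirms the application's reset path worked, the controller re-attaches at 10.0.0.2:4420, and the per-core worker threads restart below before the test syncs and tears down (the rmmod lines that follow). For comparison only (this test drives SPDK's userspace host library, not the kernel initiator), the analogous reconnect behavior on a Linux host is tuned like this:

    # Illustrative nvme-cli invocation; not run by this test.
    # Retry the association every 1 s and give up after 60 s of controller loss.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --reconnect-delay=1 --ctrl-loss-tmo=60
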
00:24:02.300 Starting thread on core 1 00:24:02.300 Starting thread on core 2 00:24:02.300 Starting thread on core 3 00:24:02.300 Starting thread on core 0 00:24:02.300 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:24:02.300 00:24:02.300 real 0m10.953s 00:24:02.300 user 0m19.009s 00:24:02.300 sys 0m5.363s 00:24:02.300 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:02.300 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:02.300 ************************************ 00:24:02.300 END TEST nvmf_target_disconnect_tc2 00:24:02.300 ************************************ 00:24:02.300 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:24:02.300 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:24:02.300 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:24:02.300 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:02.300 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:24:02.300 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:02.300 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:24:02.300 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:02.300 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:02.300 rmmod nvme_tcp 00:24:02.300 rmmod nvme_fabrics 00:24:02.300 rmmod nvme_keyring 00:24:02.559 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:02.559 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:24:02.559 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:24:02.559 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 665647 ']' 00:24:02.559 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 665647 00:24:02.559 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 665647 ']' 00:24:02.559 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 665647 00:24:02.559 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:24:02.559 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:02.559 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 665647 00:24:02.559 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:24:02.559 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:24:02.559 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 665647' 00:24:02.559 killing process with pid 665647 00:24:02.559 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 665647 00:24:02.559 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 665647 00:24:02.818 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:02.818 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:02.818 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:02.818 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:02.818 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:02.818 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.818 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:02.818 13:52:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.726 13:53:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:04.726 00:24:04.726 real 0m15.728s 00:24:04.726 user 0m45.663s 00:24:04.726 sys 0m7.278s 00:24:04.726 13:53:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:04.726 13:53:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:04.726 ************************************ 00:24:04.726 END TEST nvmf_target_disconnect 00:24:04.726 ************************************ 00:24:04.726 13:53:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:24:04.726 00:24:04.726 real 4m56.444s 00:24:04.726 user 10m28.424s 00:24:04.726 sys 1m13.286s 00:24:04.726 13:53:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:04.726 13:53:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.726 ************************************ 00:24:04.726 END TEST nvmf_host 00:24:04.726 ************************************ 00:24:04.726 00:24:04.726 real 19m8.912s 00:24:04.726 user 45m6.291s 00:24:04.726 sys 4m52.586s 00:24:04.726 13:53:01 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:04.726 13:53:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:04.726 ************************************ 00:24:04.726 END TEST nvmf_tcp 00:24:04.726 ************************************ 00:24:04.726 13:53:01 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:24:04.726 13:53:01 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:04.726 13:53:01 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:04.726 13:53:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:04.726 13:53:01 -- common/autotest_common.sh@10 -- # set +x 00:24:04.985 ************************************ 00:24:04.985 START TEST spdkcli_nvmf_tcp 00:24:04.985 ************************************ 00:24:04.985 13:53:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:04.985 * Looking for test storage... 
00:24:04.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:24:04.985 13:53:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:24:04.985 13:53:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:24:04.985 13:53:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:24:04.985 13:53:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.985 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:24:04.985 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.985 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.985 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.985 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.985 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.985 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.985 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.985 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.985 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=666848 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 666848 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 666848 ']' 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:04.986 13:53:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:04.986 [2024-07-25 13:53:01.898168] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:04.986 [2024-07-25 13:53:01.898254] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666848 ] 00:24:04.986 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.986 [2024-07-25 13:53:01.958848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:05.244 [2024-07-25 13:53:02.068143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.244 [2024-07-25 13:53:02.068147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.244 13:53:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:05.244 13:53:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:24:05.244 13:53:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:05.244 13:53:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:05.244 13:53:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:05.244 13:53:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:05.244 13:53:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:24:05.244 13:53:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:05.244 13:53:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:05.244 13:53:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:05.245 13:53:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:05.245 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:05.245 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:05.245 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:05.245 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:05.245 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:05.245 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:05.245 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:05.245 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:05.245 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:05.245 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:05.245 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:05.245 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:05.245 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:05.245 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:05.245 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:05.245 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:05.245 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:05.245 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:05.245 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:05.245 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:05.245 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:05.245 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:05.245 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:24:05.245 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:05.245 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:05.245 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:05.245 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:05.245 ' 00:24:07.787 [2024-07-25 13:53:04.764443] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.164 [2024-07-25 13:53:05.980641] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:24:11.696 [2024-07-25 13:53:08.235579] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:24:13.601 [2024-07-25 13:53:10.173583] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:24:14.979 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:24:14.979 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:24:14.979 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:24:14.979 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:24:14.979 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:24:14.979 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:24:14.979 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:24:14.979 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:24:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:24:14.979 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:14.979 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:24:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:14.979 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:24:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:24:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:24:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:24:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:24:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:24:14.979 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:24:14.979 13:53:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:24:14.979 13:53:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:14.979 13:53:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:14.979 13:53:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:24:14.979 13:53:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:14.979 13:53:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:14.979 13:53:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:24:14.979 13:53:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:24:15.238 13:53:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:24:15.238 13:53:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:24:15.238 13:53:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:24:15.238 13:53:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:15.238 13:53:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:15.238 13:53:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:24:15.238 13:53:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:15.238 13:53:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:15.238 13:53:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:24:15.238 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:24:15.238 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:15.238 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:24:15.238 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:24:15.238 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:24:15.239 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:24:15.239 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:15.239 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:24:15.239 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:24:15.239 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:24:15.239 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:24:15.239 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:24:15.239 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:24:15.239 ' 00:24:20.510 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:24:20.510 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:24:20.510 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:20.510 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:24:20.510 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:24:20.510 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:24:20.510 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:24:20.510 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:20.510 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:24:20.510 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:24:20.510 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:24:20.510 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:24:20.510 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:24:20.510 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:24:20.510 13:53:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:24:20.510 13:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:20.510 13:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:20.510 13:53:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 666848 00:24:20.510 13:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 666848 ']' 00:24:20.510 13:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 666848 00:24:20.510 13:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:24:20.510 13:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:20.510 13:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 666848 00:24:20.510 13:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:20.510 13:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:20.510 13:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 666848' 00:24:20.510 killing process with pid 666848 00:24:20.510 13:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 666848 00:24:20.510 13:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 666848 00:24:20.769 13:53:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:24:20.769 13:53:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:24:20.769 13:53:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 666848 ']' 00:24:20.769 13:53:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 666848 00:24:20.769 13:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 666848 ']' 00:24:20.769 13:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 666848 00:24:20.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (666848) - No such process 00:24:20.769 13:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 666848 is not found' 00:24:20.769 Process with pid 666848 is not found 00:24:20.769 13:53:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:24:20.769 13:53:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:24:20.769 13:53:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:24:20.769 00:24:20.769 real 0m15.982s 00:24:20.769 user 0m33.662s 00:24:20.769 sys 0m0.823s 00:24:20.769 13:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:20.769 13:53:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:20.769 ************************************ 00:24:20.769 END TEST spdkcli_nvmf_tcp 00:24:20.769 ************************************ 00:24:20.769 13:53:17 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:24:20.769 13:53:17 -- 
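
[Editor's note] END TEST spdkcli_nvmf_tcp above: the whole test boils down to one nvmf_tgt (-m 0x3) plus spdkcli create/delete commands that map one-to-one onto JSON-RPCs, with the resulting tree checked by diffing 'spdkcli.py ll /nvmf' output against match_files/spdkcli_nvmf.test.match. Below is a minimal sketch of the same create flow issued directly with scripts/rpc.py over the default /var/tmp/spdk.sock; the command names and flags are the standard rpc.py ones (spellings can drift between SPDK releases, check 'rpc.py <cmd> -h'), and the sequence is distilled from the log, not an excerpt of the test script.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_malloc_create 32 512 -b Malloc1        # '/bdevs/malloc create 32 512 Malloc1'
    $RPC nvmf_create_transport -t tcp -u 8192 -m 4   # max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    $RPC nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -a -m 4
    $RPC nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260
    $RPC nvmf_get_subsystems                         # the state 'spdkcli.py ll /nvmf' renders as a tree
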
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:20.769 13:53:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:20.769 13:53:17 -- common/autotest_common.sh@10 -- # set +x 00:24:21.028 ************************************ 00:24:21.028 START TEST nvmf_identify_passthru 00:24:21.028 ************************************ 00:24:21.028 13:53:17 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:24:21.028 * Looking for test storage... 00:24:21.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:21.028 13:53:17 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:21.028 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:24:21.028 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:21.029 13:53:17 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.029 13:53:17 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.029 13:53:17 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.029 13:53:17 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.029 13:53:17 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.029 13:53:17 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.029 13:53:17 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:24:21.029 13:53:17 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:21.029 13:53:17 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:21.029 13:53:17 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.029 13:53:17 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.029 13:53:17 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.029 13:53:17 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.029 13:53:17 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.029 13:53:17 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.029 13:53:17 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:24:21.029 13:53:17 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.029 13:53:17 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.029 13:53:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:21.029 13:53:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:21.029 13:53:17 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:24:21.029 13:53:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
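# Note: the trace that follows walks gather_supported_nvmf_pci_devs(), which buckets
# supported NICs by PCI vendor:device ID (e810: 0x1592/0x159b, x722: 0x37d2, plus the
# Mellanox ConnectX IDs) and then resolves each PCI function to its kernel net device
# through sysfs. A minimal standalone sketch of that resolution step, assuming the E810
# port at 0000:0a:00.0 seen in this run (annotation only, not captured output):
#
#   pci=0000:0a:00.0
#   for dev in "/sys/bus/pci/devices/$pci/net/"*; do
#       # each entry under <pci>/net/ is a netdev bound to that PCI function
#       echo "Found net devices under $pci: ${dev##*/}"    # e.g. cvl_0_0
#   done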
00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:22.932 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:22.932 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:22.932 13:53:19 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:22.932 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:22.932 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:22.932 13:53:19 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:24:22.932 13:53:19 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:23.191 13:53:20 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:23.191 13:53:20 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:23.191 13:53:20 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:24:23.191 13:53:20 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:23.191 13:53:20 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:23.191 13:53:20 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:23.191 13:53:20 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:23.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:23.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms
00:24:23.191 
00:24:23.191 --- 10.0.0.2 ping statistics ---
00:24:23.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:23.191 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms
00:24:23.191 13:53:20 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:23.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:23.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms
00:24:23.191 
00:24:23.191 --- 10.0.0.1 ping statistics ---
00:24:23.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:23.191 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms
00:24:23.191 13:53:20 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:23.191 13:53:20 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0
00:24:23.191 13:53:20 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:23.191 13:53:20 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:23.191 13:53:20 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:23.191 13:53:20 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:23.191 13:53:20 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:23.191 13:53:20 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:23.191 13:53:20 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:23.191 13:53:20 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify
00:24:23.191 13:53:20 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:23.191 13:53:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:24:23.191 13:53:20 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf
00:24:23.191 13:53:20 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=()
00:24:23.191 13:53:20 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs
00:24:23.191 13:53:20 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs))
00:24:23.191 13:53:20 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs
00:24:23.191 13:53:20 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=()
00:24:23.191 13:53:20 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs
00:24:23.191 13:53:20 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:24:23.191 13:53:20 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:24:23.191 13:53:20 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr'
00:24:23.191 13:53:20 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 ))
00:24:23.191 13:53:20 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0
00:24:23.191 13:53:20 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0
00:24:23.191 13:53:20 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0
00:24:23.191 13:53:20 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']'
00:24:23.191 13:53:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0
00:24:23.191 13:53:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:'
00:24:23.191 13:53:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}'
00:24:23.191 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.387
13:53:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:24:27.387 13:53:24 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:24:27.387 13:53:24 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:24:27.387 13:53:24 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:24:27.387 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.585 13:53:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:24:31.585 13:53:28 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:24:31.585 13:53:28 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:31.585 13:53:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:31.845 13:53:28 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:24:31.845 13:53:28 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:31.845 13:53:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:31.845 13:53:28 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=671456 00:24:31.845 13:53:28 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:31.845 13:53:28 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:31.845 13:53:28 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 671456 00:24:31.845 13:53:28 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 671456 ']' 00:24:31.845 13:53:28 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.845 13:53:28 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:31.845 13:53:28 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.845 13:53:28 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:31.845 13:53:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:31.845 [2024-07-25 13:53:28.678750] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:31.845 [2024-07-25 13:53:28.678846] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.845 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.845 [2024-07-25 13:53:28.741529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:31.845 [2024-07-25 13:53:28.840961] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.845 [2024-07-25 13:53:28.841023] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:31.845 [2024-07-25 13:53:28.841047] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.845 [2024-07-25 13:53:28.841064] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.845 [2024-07-25 13:53:28.841089] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.845 [2024-07-25 13:53:28.841146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.845 [2024-07-25 13:53:28.841205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.845 [2024-07-25 13:53:28.841269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:31.845 [2024-07-25 13:53:28.841272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.845 13:53:28 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:32.105 13:53:28 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:24:32.105 13:53:28 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:24:32.105 13:53:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.105 13:53:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:32.105 INFO: Log level set to 20 00:24:32.105 INFO: Requests: 00:24:32.105 { 00:24:32.105 "jsonrpc": "2.0", 00:24:32.105 "method": "nvmf_set_config", 00:24:32.105 "id": 1, 00:24:32.105 "params": { 00:24:32.105 "admin_cmd_passthru": { 00:24:32.105 "identify_ctrlr": true 00:24:32.105 } 00:24:32.105 } 00:24:32.105 } 00:24:32.105 00:24:32.105 INFO: response: 00:24:32.105 { 00:24:32.105 "jsonrpc": "2.0", 00:24:32.105 "id": 1, 00:24:32.105 "result": true 00:24:32.105 } 00:24:32.105 00:24:32.105 13:53:28 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.105 13:53:28 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:24:32.105 13:53:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.105 13:53:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:32.105 INFO: Setting log level to 20 00:24:32.105 INFO: Setting log level to 20 00:24:32.105 INFO: Log level set to 20 00:24:32.105 INFO: Log level set to 20 00:24:32.105 INFO: Requests: 00:24:32.105 { 00:24:32.105 "jsonrpc": "2.0", 00:24:32.105 "method": "framework_start_init", 00:24:32.105 "id": 1 00:24:32.105 } 00:24:32.105 00:24:32.105 INFO: Requests: 00:24:32.105 { 00:24:32.105 "jsonrpc": "2.0", 00:24:32.105 "method": "framework_start_init", 00:24:32.105 "id": 1 00:24:32.105 } 00:24:32.105 00:24:32.105 [2024-07-25 13:53:28.993432] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:24:32.105 INFO: response: 00:24:32.105 { 00:24:32.105 "jsonrpc": "2.0", 00:24:32.105 "id": 1, 00:24:32.105 "result": true 00:24:32.105 } 00:24:32.105 00:24:32.105 INFO: response: 00:24:32.105 { 00:24:32.105 "jsonrpc": "2.0", 00:24:32.105 "id": 1, 00:24:32.105 "result": true 00:24:32.105 } 00:24:32.105 00:24:32.105 13:53:28 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.105 13:53:28 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:32.105 13:53:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.105 13:53:28 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:24:32.105 INFO: Setting log level to 40 00:24:32.105 INFO: Setting log level to 40 00:24:32.105 INFO: Setting log level to 40 00:24:32.105 [2024-07-25 13:53:29.003621] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.105 13:53:29 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.105 13:53:29 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:24:32.105 13:53:29 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:32.105 13:53:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:32.105 13:53:29 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:24:32.105 13:53:29 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.105 13:53:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:35.391 Nvme0n1 00:24:35.391 13:53:31 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.391 13:53:31 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:24:35.391 13:53:31 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.391 13:53:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:35.391 13:53:31 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.391 13:53:31 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:35.391 13:53:31 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.391 13:53:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:35.391 13:53:31 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.391 13:53:31 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:35.391 13:53:31 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.391 13:53:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:35.391 [2024-07-25 13:53:31.905376] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.391 13:53:31 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.391 13:53:31 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:24:35.391 13:53:31 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.391 13:53:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:35.391 [ 00:24:35.391 { 00:24:35.391 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:35.391 "subtype": "Discovery", 00:24:35.391 "listen_addresses": [], 00:24:35.391 "allow_any_host": true, 00:24:35.391 "hosts": [] 00:24:35.391 }, 00:24:35.391 { 00:24:35.391 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.391 "subtype": "NVMe", 00:24:35.391 "listen_addresses": [ 00:24:35.391 { 00:24:35.391 "trtype": "TCP", 00:24:35.391 "adrfam": "IPv4", 00:24:35.391 "traddr": "10.0.0.2", 00:24:35.391 "trsvcid": "4420" 00:24:35.391 } 00:24:35.391 ], 00:24:35.391 "allow_any_host": true, 00:24:35.391 "hosts": [], 00:24:35.391 "serial_number": 
"SPDK00000000000001", 00:24:35.391 "model_number": "SPDK bdev Controller", 00:24:35.391 "max_namespaces": 1, 00:24:35.391 "min_cntlid": 1, 00:24:35.391 "max_cntlid": 65519, 00:24:35.391 "namespaces": [ 00:24:35.391 { 00:24:35.391 "nsid": 1, 00:24:35.391 "bdev_name": "Nvme0n1", 00:24:35.391 "name": "Nvme0n1", 00:24:35.391 "nguid": "BC5DA8CF61AD4031B7AD2D9E42A67439", 00:24:35.391 "uuid": "bc5da8cf-61ad-4031-b7ad-2d9e42a67439" 00:24:35.391 } 00:24:35.391 ] 00:24:35.391 } 00:24:35.391 ] 00:24:35.391 13:53:31 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.391 13:53:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:35.392 13:53:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:24:35.392 13:53:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:24:35.392 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.392 13:53:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:24:35.392 13:53:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:35.392 13:53:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:24:35.392 13:53:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:24:35.392 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.392 13:53:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:24:35.392 13:53:32 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:24:35.392 13:53:32 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:24:35.392 13:53:32 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:35.392 13:53:32 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.392 13:53:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:24:35.392 13:53:32 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.392 13:53:32 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:24:35.392 13:53:32 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:24:35.392 13:53:32 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:35.392 13:53:32 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:24:35.392 13:53:32 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:35.392 13:53:32 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:24:35.392 13:53:32 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:35.392 13:53:32 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:35.392 rmmod nvme_tcp 00:24:35.392 rmmod nvme_fabrics 00:24:35.392 rmmod nvme_keyring 00:24:35.392 13:53:32 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:35.392 13:53:32 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:24:35.392 13:53:32 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0
00:24:35.392 13:53:32 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 671456 ']'
00:24:35.392 13:53:32 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 671456
00:24:35.392 13:53:32 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 671456 ']'
00:24:35.392 13:53:32 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 671456
00:24:35.392 13:53:32 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname
00:24:35.392 13:53:32 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:35.392 13:53:32 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 671456
00:24:35.392 13:53:32 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:24:35.392 13:53:32 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:24:35.392 13:53:32 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 671456'
00:24:35.392 killing process with pid 671456
00:24:35.392 13:53:32 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 671456
00:24:35.392 13:53:32 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 671456
00:24:37.324 13:53:33 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:37.324 13:53:33 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:37.324 13:53:33 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:37.324 13:53:33 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:37.324 13:53:33 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:37.324 13:53:33 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:37.324 13:53:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:24:37.324 13:53:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:39.229 13:53:35 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:39.229 
00:24:39.229 real 0m18.126s
00:24:39.229 user 0m26.678s
00:24:39.229 sys 0m2.363s
00:24:39.229 13:53:35 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable
00:24:39.229 13:53:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:24:39.229 ************************************
00:24:39.229 END TEST nvmf_identify_passthru
00:24:39.229 ************************************
00:24:39.229 13:53:35 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh
00:24:39.229 13:53:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:24:39.229 13:53:35 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:24:39.229 13:53:35 -- common/autotest_common.sh@10 -- # set +x
00:24:39.229 ************************************
00:24:39.229 START TEST nvmf_dif
00:24:39.229 ************************************
00:24:39.229 13:53:35 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh
00:24:39.229 * Looking for test storage...
00:24:39.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:39.229 13:53:36 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:39.229 13:53:36 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:24:39.229 13:53:36 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.229 13:53:36 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.229 13:53:36 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.229 13:53:36 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.229 13:53:36 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.229 13:53:36 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.229 13:53:36 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.229 13:53:36 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.229 13:53:36 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.229 13:53:36 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.229 13:53:36 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:39.229 13:53:36 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:39.229 13:53:36 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.229 13:53:36 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.229 13:53:36 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:39.229 13:53:36 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.229 13:53:36 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.229 13:53:36 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.229 13:53:36 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.229 13:53:36 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.229 13:53:36 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.229 13:53:36 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.229 13:53:36 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.229 13:53:36 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:24:39.229 13:53:36 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.229 13:53:36 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:24:39.229 13:53:36 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:39.229 13:53:36 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:39.229 13:53:36 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.229 13:53:36 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.230 13:53:36 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.230 13:53:36 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:39.230 13:53:36 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:39.230 13:53:36 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:39.230 13:53:36 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:24:39.230 13:53:36 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:24:39.230 13:53:36 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:24:39.230 13:53:36 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:24:39.230 13:53:36 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:24:39.230 13:53:36 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:39.230 13:53:36 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.230 13:53:36 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:39.230 13:53:36 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:39.230 13:53:36 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:39.230 13:53:36 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.230 13:53:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:39.230 13:53:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.230 13:53:36 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:39.230 13:53:36 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:39.230 13:53:36 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:24:39.230 13:53:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:41.137 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:41.137 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:41.137 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:41.137 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:41.137 13:53:37 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:41.138 13:53:37 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:41.138 13:53:37 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:41.138 13:53:37 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:41.138 13:53:37 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:41.138 13:53:38 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:41.138 13:53:38 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:41.138 13:53:38 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:41.138 13:53:38 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:41.138 13:53:38 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:41.138 13:53:38 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:41.138 13:53:38 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:41.138 13:53:38 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:41.138 13:53:38 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:41.138 13:53:38 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:41.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:41.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:24:41.138 00:24:41.138 --- 10.0.0.2 ping statistics --- 00:24:41.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.138 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:24:41.138 13:53:38 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:41.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:41.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:24:41.138 00:24:41.138 --- 10.0.0.1 ping statistics --- 00:24:41.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.138 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:24:41.138 13:53:38 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:41.138 13:53:38 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:24:41.138 13:53:38 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:24:41.138 13:53:38 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:42.516 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:42.516 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:42.516 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:42.516 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:42.516 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:42.516 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:42.516 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:42.516 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:42.516 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:42.516 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:42.516 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:42.516 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:42.516 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:42.516 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:42.516 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:42.516 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:42.516 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:42.516 13:53:39 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:42.516 13:53:39 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:42.516 13:53:39 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:42.516 13:53:39 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:42.516 13:53:39 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:42.516 13:53:39 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:42.516 13:53:39 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:24:42.516 13:53:39 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:24:42.516 13:53:39 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:42.516 13:53:39 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:42.516 13:53:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:42.516 13:53:39 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=674598 00:24:42.516 13:53:39 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:42.517 13:53:39 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 674598 00:24:42.517 13:53:39 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 674598 ']' 00:24:42.517 13:53:39 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.517 13:53:39 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:42.517 13:53:39 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.517 13:53:39 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:42.517 13:53:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:42.517 [2024-07-25 13:53:39.551677] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:42.517 [2024-07-25 13:53:39.551768] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.775 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.775 [2024-07-25 13:53:39.616453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.775 [2024-07-25 13:53:39.728518] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.775 [2024-07-25 13:53:39.728572] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.775 [2024-07-25 13:53:39.728594] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.775 [2024-07-25 13:53:39.728613] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.775 [2024-07-25 13:53:39.728623] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
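# Note: the EAL/app notices above come from the nvmf_dif target, started inside the test
# namespace by nvmf/common.sh@480 (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt
# -i 0 -e 0xFFFF). A minimal sketch of the same bring-up driven by hand with SPDK's
# standard scripts/rpc.py client instead of the suite's rpc_cmd wrapper; the method names
# and flags are copied from the trace below (annotation only, not captured output):
#
#   ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
#   ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
#   ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
#   ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
#   ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
#   ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420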
00:24:42.775 [2024-07-25 13:53:39.728649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.034 13:53:39 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:43.034 13:53:39 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:24:43.034 13:53:39 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:43.034 13:53:39 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:43.034 13:53:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:43.034 13:53:39 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:43.034 13:53:39 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:24:43.034 13:53:39 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:24:43.034 13:53:39 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.034 13:53:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:43.034 [2024-07-25 13:53:39.868926] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.034 13:53:39 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.034 13:53:39 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:24:43.034 13:53:39 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:43.034 13:53:39 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:43.034 13:53:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:43.034 ************************************ 00:24:43.034 START TEST fio_dif_1_default 00:24:43.034 ************************************ 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:43.034 bdev_null0 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:43.034 [2024-07-25 13:53:39.929252] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:43.034 13:53:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:43.034 { 00:24:43.034 "params": { 00:24:43.034 "name": "Nvme$subsystem", 00:24:43.034 "trtype": "$TEST_TRANSPORT", 00:24:43.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.034 "adrfam": "ipv4", 00:24:43.034 "trsvcid": "$NVMF_PORT", 00:24:43.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.034 "hdgst": ${hdgst:-false}, 00:24:43.035 "ddgst": ${ddgst:-false} 00:24:43.035 }, 00:24:43.035 "method": "bdev_nvme_attach_controller" 00:24:43.035 } 00:24:43.035 EOF 00:24:43.035 )") 00:24:43.035 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:43.035 13:53:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:24:43.035 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:43.035 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:24:43.035 13:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:24:43.035 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:43.035 13:53:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:24:43.035 13:53:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:24:43.035 13:53:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:24:43.035 13:53:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:43.035 "params": { 00:24:43.035 "name": "Nvme0", 00:24:43.035 "trtype": "tcp", 00:24:43.035 "traddr": "10.0.0.2", 00:24:43.035 "adrfam": "ipv4", 00:24:43.035 "trsvcid": "4420", 00:24:43.035 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:43.035 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:43.035 "hdgst": false, 00:24:43.035 "ddgst": false 00:24:43.035 }, 00:24:43.035 "method": "bdev_nvme_attach_controller" 00:24:43.035 }' 00:24:43.035 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:43.035 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:43.035 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:43.035 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:43.035 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:43.035 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:43.035 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:43.035 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:43.035 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:24:43.035 13:53:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:43.294 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:43.294 fio-3.35 00:24:43.294 Starting 1 thread 00:24:43.294 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.502 00:24:55.502 filename0: (groupid=0, jobs=1): err= 0: pid=674852: Thu Jul 25 13:53:50 2024 00:24:55.502 read: IOPS=99, BW=398KiB/s (407kB/s)(3984KiB/10019msec) 00:24:55.502 slat (usec): min=4, max=125, avg= 9.81, stdev= 6.64 00:24:55.502 clat (usec): min=613, max=48251, avg=40203.34, stdev=5681.88 00:24:55.502 lat (usec): min=634, max=48278, avg=40213.15, stdev=5681.32 00:24:55.502 clat percentiles (usec): 00:24:55.502 | 1.00th=[ 660], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:24:55.502 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:24:55.502 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:24:55.502 | 99.00th=[41681], 99.50th=[41681], 99.90th=[48497], 99.95th=[48497], 00:24:55.502 | 99.99th=[48497] 00:24:55.502 bw ( KiB/s): min= 384, max= 448, per=99.59%, avg=396.80, stdev=19.14, samples=20 00:24:55.502 iops : min= 96, max= 112, avg=99.20, 
stdev= 4.79, samples=20 00:24:55.502 lat (usec) : 750=2.01% 00:24:55.502 lat (msec) : 50=97.99% 00:24:55.502 cpu : usr=90.16%, sys=9.50%, ctx=33, majf=0, minf=491 00:24:55.502 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:55.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:55.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:55.503 issued rwts: total=996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:55.503 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:55.503 00:24:55.503 Run status group 0 (all jobs): 00:24:55.503 READ: bw=398KiB/s (407kB/s), 398KiB/s-398KiB/s (407kB/s-407kB/s), io=3984KiB (4080kB), run=10019-10019msec 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.503 00:24:55.503 real 0m11.314s 00:24:55.503 user 0m10.361s 00:24:55.503 sys 0m1.283s 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:55.503 ************************************ 00:24:55.503 END TEST fio_dif_1_default 00:24:55.503 ************************************ 00:24:55.503 13:53:51 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:24:55.503 13:53:51 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:55.503 13:53:51 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:55.503 13:53:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:55.503 ************************************ 00:24:55.503 START TEST fio_dif_1_multi_subsystems 00:24:55.503 ************************************ 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@31 -- # create_subsystem 0 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:55.503 bdev_null0 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:55.503 [2024-07-25 13:53:51.289501] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:55.503 bdev_null1 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
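Each subsystem in this test is wired up with the same four RPCs traced above. A standalone equivalent using SPDK's scripts/rpc.py (default socket /var/tmp/spdk.sock assumed), shown here for the second subsystem:

  rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdev_null_create takes the bdev name, its size in MiB (64) and block size in bytes (512); --md-size 16 --dif-type 1 reserves 16 bytes of per-block metadata and enables DIF type 1 protection, which is what the dif-insert-or-strip transport created earlier exercises.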
00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:55.503 { 00:24:55.503 "params": { 00:24:55.503 "name": "Nvme$subsystem", 00:24:55.503 "trtype": "$TEST_TRANSPORT", 00:24:55.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.503 "adrfam": "ipv4", 00:24:55.503 "trsvcid": "$NVMF_PORT", 00:24:55.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.503 "hdgst": ${hdgst:-false}, 00:24:55.503 "ddgst": ${ddgst:-false} 00:24:55.503 }, 00:24:55.503 "method": "bdev_nvme_attach_controller" 00:24:55.503 } 00:24:55.503 EOF 00:24:55.503 )") 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1341 -- # shift 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:55.503 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:24:55.504 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:24:55.504 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:24:55.504 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:24:55.504 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:55.504 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:55.504 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:55.504 { 00:24:55.504 "params": { 00:24:55.504 "name": "Nvme$subsystem", 00:24:55.504 "trtype": "$TEST_TRANSPORT", 00:24:55.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.504 "adrfam": "ipv4", 00:24:55.504 "trsvcid": "$NVMF_PORT", 00:24:55.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.504 "hdgst": ${hdgst:-false}, 00:24:55.504 "ddgst": ${ddgst:-false} 00:24:55.504 }, 00:24:55.504 "method": "bdev_nvme_attach_controller" 00:24:55.504 } 00:24:55.504 EOF 00:24:55.504 )") 00:24:55.504 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:24:55.504 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:24:55.504 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:24:55.504 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
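gen_nvmf_target_json builds one heredoc fragment per subsystem in a bash array and then joins them, which is why two near-identical EOF blocks appear in the trace above. A trimmed sketch of that accumulation pattern (params abbreviated; the real helper hands the result to fio through /dev/fd/62):

  config=()
  for sub in 0 1; do
    config+=("{\"params\":{\"name\":\"Nvme$sub\",\"subnqn\":\"nqn.2016-06.io.spdk:cnode$sub\"},\"method\":\"bdev_nvme_attach_controller\"}")
  done
  ( IFS=, ; printf '%s\n' "${config[*]}" )   # comma-joins the fragments, as printed just below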
00:24:55.504 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:24:55.504 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:55.504 "params": { 00:24:55.504 "name": "Nvme0", 00:24:55.504 "trtype": "tcp", 00:24:55.504 "traddr": "10.0.0.2", 00:24:55.504 "adrfam": "ipv4", 00:24:55.504 "trsvcid": "4420", 00:24:55.504 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:55.504 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:55.504 "hdgst": false, 00:24:55.504 "ddgst": false 00:24:55.504 }, 00:24:55.504 "method": "bdev_nvme_attach_controller" 00:24:55.504 },{ 00:24:55.504 "params": { 00:24:55.504 "name": "Nvme1", 00:24:55.504 "trtype": "tcp", 00:24:55.504 "traddr": "10.0.0.2", 00:24:55.504 "adrfam": "ipv4", 00:24:55.504 "trsvcid": "4420", 00:24:55.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:55.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:55.504 "hdgst": false, 00:24:55.504 "ddgst": false 00:24:55.504 }, 00:24:55.504 "method": "bdev_nvme_attach_controller" 00:24:55.504 }' 00:24:55.504 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:55.504 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:55.504 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:55.504 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:55.504 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:55.504 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:55.504 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:55.504 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:55.504 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:24:55.504 13:53:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:55.504 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:55.504 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:55.504 fio-3.35 00:24:55.504 Starting 2 threads 00:24:55.504 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.468 00:25:05.468 filename0: (groupid=0, jobs=1): err= 0: pid=676335: Thu Jul 25 13:54:02 2024 00:25:05.468 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10007msec) 00:25:05.468 slat (nsec): min=7049, max=41905, avg=9380.76, stdev=3484.81 00:25:05.468 clat (usec): min=539, max=45489, avg=21082.21, stdev=20374.37 00:25:05.468 lat (usec): min=547, max=45523, avg=21091.59, stdev=20374.15 00:25:05.468 clat percentiles (usec): 00:25:05.468 | 1.00th=[ 578], 5.00th=[ 586], 10.00th=[ 603], 20.00th=[ 627], 00:25:05.468 | 30.00th=[ 676], 40.00th=[ 906], 50.00th=[ 1074], 60.00th=[41157], 00:25:05.468 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:25:05.468 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:25:05.468 | 99.99th=[45351] 00:25:05.468 
bw ( KiB/s): min= 704, max= 832, per=65.95%, avg=756.80, stdev=31.62, samples=20 00:25:05.468 iops : min= 176, max= 208, avg=189.20, stdev= 7.90, samples=20 00:25:05.468 lat (usec) : 750=36.23%, 1000=12.76% 00:25:05.468 lat (msec) : 2=1.00%, 50=50.00% 00:25:05.468 cpu : usr=94.27%, sys=5.43%, ctx=16, majf=0, minf=186 00:25:05.468 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:05.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:05.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:05.468 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:05.468 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:05.468 filename1: (groupid=0, jobs=1): err= 0: pid=676336: Thu Jul 25 13:54:02 2024 00:25:05.468 read: IOPS=97, BW=389KiB/s (398kB/s)(3888KiB/10007msec) 00:25:05.468 slat (nsec): min=7005, max=84601, avg=9975.90, stdev=5041.47 00:25:05.468 clat (usec): min=665, max=46839, avg=41148.45, stdev=2672.68 00:25:05.468 lat (usec): min=673, max=46890, avg=41158.43, stdev=2672.87 00:25:05.468 clat percentiles (usec): 00:25:05.468 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:25:05.468 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:25:05.468 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:25:05.468 | 99.00th=[42730], 99.50th=[43254], 99.90th=[46924], 99.95th=[46924], 00:25:05.468 | 99.99th=[46924] 00:25:05.468 bw ( KiB/s): min= 352, max= 416, per=33.76%, avg=387.20, stdev=14.31, samples=20 00:25:05.468 iops : min= 88, max= 104, avg=96.80, stdev= 3.58, samples=20 00:25:05.468 lat (usec) : 750=0.41% 00:25:05.468 lat (msec) : 50=99.59% 00:25:05.468 cpu : usr=94.13%, sys=5.56%, ctx=61, majf=0, minf=108 00:25:05.468 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:05.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:05.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:05.468 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:05.468 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:05.468 00:25:05.468 Run status group 0 (all jobs): 00:25:05.468 READ: bw=1146KiB/s (1174kB/s), 389KiB/s-758KiB/s (398kB/s-776kB/s), io=11.2MiB (11.7MB), run=10007-10007msec 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.726 00:25:05.726 real 0m11.445s 00:25:05.726 user 0m20.408s 00:25:05.726 sys 0m1.389s 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:05.726 13:54:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:05.726 ************************************ 00:25:05.726 END TEST fio_dif_1_multi_subsystems 00:25:05.726 ************************************ 00:25:05.726 13:54:02 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:05.726 13:54:02 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:05.726 13:54:02 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:05.726 13:54:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:05.726 ************************************ 00:25:05.726 START TEST fio_dif_rand_params 00:25:05.726 ************************************ 00:25:05.726 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:25:05.726 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:25:05.726 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:05.726 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:25:05.726 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:25:05.726 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:25:05.726 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:25:05.726 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:25:05.726 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:25:05.726 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:05.726 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:05.726 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:05.726 
13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:05.726 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:05.726 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.726 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:05.985 bdev_null0 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:05.985 [2024-07-25 13:54:02.781863] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # 
local sanitizers 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:05.985 { 00:25:05.985 "params": { 00:25:05.985 "name": "Nvme$subsystem", 00:25:05.985 "trtype": "$TEST_TRANSPORT", 00:25:05.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:05.985 "adrfam": "ipv4", 00:25:05.985 "trsvcid": "$NVMF_PORT", 00:25:05.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:05.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:05.985 "hdgst": ${hdgst:-false}, 00:25:05.985 "ddgst": ${ddgst:-false} 00:25:05.985 }, 00:25:05.985 "method": "bdev_nvme_attach_controller" 00:25:05.985 } 00:25:05.985 EOF 00:25:05.985 )") 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
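The ldd/grep/awk steps in the trace above detect whether the fio plugin was linked against a sanitizer runtime, since ASan must be the first library preloaded. A condensed sketch of that detection logic as the harness applies it:

  plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
  asan_lib=
  for lib in libasan libclang_rt.asan; do
    asan_lib=$(ldd "$plugin" | grep "$lib" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
  done
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61

In this run neither sanitizer library matches, so asan_lib stays empty and LD_PRELOAD ends up carrying only the plugin itself, as the subsequent trace lines confirm.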
00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:05.985 "params": { 00:25:05.985 "name": "Nvme0", 00:25:05.985 "trtype": "tcp", 00:25:05.985 "traddr": "10.0.0.2", 00:25:05.985 "adrfam": "ipv4", 00:25:05.985 "trsvcid": "4420", 00:25:05.985 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:05.985 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:05.985 "hdgst": false, 00:25:05.985 "ddgst": false 00:25:05.985 }, 00:25:05.985 "method": "bdev_nvme_attach_controller" 00:25:05.985 }' 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:05.985 13:54:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:06.246 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:06.246 ... 
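The filename0 job description above comes from the job file that gen_fio_conf writes to /dev/fd/61 from the target/dif.sh@103 parameters (bs=128k, numjobs=3, iodepth=3, runtime=5). A rough sketch of what that generated file might look like; the exact option set and the Nvme0n1 bdev name are assumptions based on SPDK naming conventions:

  [global]
  thread=1
  ioengine=spdk_bdev
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  runtime=5
  time_based=1

  [filename0]
  filename=Nvme0n1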
00:25:06.246 fio-3.35 00:25:06.246 Starting 3 threads 00:25:06.246 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.845 00:25:12.845 filename0: (groupid=0, jobs=1): err= 0: pid=677860: Thu Jul 25 13:54:08 2024 00:25:12.845 read: IOPS=227, BW=28.5MiB/s (29.9MB/s)(144MiB/5044msec) 00:25:12.845 slat (nsec): min=6260, max=45389, avg=14010.51, stdev=3904.74 00:25:12.845 clat (usec): min=4674, max=55410, avg=13103.64, stdev=5122.08 00:25:12.845 lat (usec): min=4687, max=55423, avg=13117.65, stdev=5122.10 00:25:12.845 clat percentiles (usec): 00:25:12.845 | 1.00th=[ 7242], 5.00th=[ 8291], 10.00th=[ 8979], 20.00th=[10683], 00:25:12.845 | 30.00th=[11600], 40.00th=[12256], 50.00th=[12780], 60.00th=[13435], 00:25:12.845 | 70.00th=[13960], 80.00th=[14615], 90.00th=[15664], 95.00th=[16581], 00:25:12.845 | 99.00th=[47973], 99.50th=[51643], 99.90th=[55313], 99.95th=[55313], 00:25:12.845 | 99.99th=[55313] 00:25:12.845 bw ( KiB/s): min=27136, max=32768, per=33.12%, avg=29388.80, stdev=1918.77, samples=10 00:25:12.845 iops : min= 212, max= 256, avg=229.60, stdev=14.99, samples=10 00:25:12.845 lat (msec) : 10=16.43%, 20=82.00%, 50=0.78%, 100=0.78% 00:25:12.845 cpu : usr=92.64%, sys=6.86%, ctx=19, majf=0, minf=97 00:25:12.845 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:12.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.845 issued rwts: total=1150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.845 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:12.845 filename0: (groupid=0, jobs=1): err= 0: pid=677861: Thu Jul 25 13:54:08 2024 00:25:12.845 read: IOPS=228, BW=28.5MiB/s (29.9MB/s)(144MiB/5044msec) 00:25:12.845 slat (nsec): min=6162, max=42961, avg=14310.90, stdev=3855.10 00:25:12.845 clat (usec): min=4433, max=91958, avg=13090.41, stdev=6753.76 00:25:12.845 lat (usec): min=4445, max=91970, avg=13104.72, stdev=6753.78 00:25:12.845 clat percentiles (usec): 00:25:12.845 | 1.00th=[ 5538], 5.00th=[ 8225], 10.00th=[ 8848], 20.00th=[10421], 00:25:12.845 | 30.00th=[11207], 40.00th=[11863], 50.00th=[12256], 60.00th=[12780], 00:25:12.845 | 70.00th=[13435], 80.00th=[14091], 90.00th=[15270], 95.00th=[16188], 00:25:12.845 | 99.00th=[51643], 99.50th=[52167], 99.90th=[61080], 99.95th=[91751], 00:25:12.845 | 99.99th=[91751] 00:25:12.845 bw ( KiB/s): min=23552, max=33536, per=33.15%, avg=29414.40, stdev=2887.37, samples=10 00:25:12.845 iops : min= 184, max= 262, avg=229.80, stdev=22.56, samples=10 00:25:12.845 lat (msec) : 10=16.51%, 20=81.06%, 50=0.87%, 100=1.56% 00:25:12.845 cpu : usr=92.43%, sys=7.08%, ctx=10, majf=0, minf=102 00:25:12.845 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:12.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.845 issued rwts: total=1151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.845 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:12.845 filename0: (groupid=0, jobs=1): err= 0: pid=677862: Thu Jul 25 13:54:08 2024 00:25:12.845 read: IOPS=237, BW=29.6MiB/s (31.1MB/s)(150MiB/5044msec) 00:25:12.845 slat (nsec): min=7686, max=65653, avg=15882.88, stdev=5998.20 00:25:12.845 clat (usec): min=4192, max=54253, avg=12595.52, stdev=6696.07 00:25:12.845 lat (usec): min=4206, max=54274, avg=12611.40, stdev=6696.18 00:25:12.845 clat percentiles (usec): 
00:25:12.845 | 1.00th=[ 6849], 5.00th=[ 8029], 10.00th=[ 8717], 20.00th=[10290], 00:25:12.845 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11731], 60.00th=[12125], 00:25:12.845 | 70.00th=[12518], 80.00th=[13042], 90.00th=[14091], 95.00th=[15664], 00:25:12.845 | 99.00th=[52167], 99.50th=[52691], 99.90th=[53216], 99.95th=[54264], 00:25:12.845 | 99.99th=[54264] 00:25:12.845 bw ( KiB/s): min=23296, max=33792, per=34.45%, avg=30572.70, stdev=3236.18, samples=10 00:25:12.845 iops : min= 182, max= 264, avg=238.80, stdev=25.27, samples=10 00:25:12.845 lat (msec) : 10=17.06%, 20=80.27%, 50=0.75%, 100=1.92% 00:25:12.845 cpu : usr=91.28%, sys=7.40%, ctx=140, majf=0, minf=140 00:25:12.845 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:12.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.845 issued rwts: total=1196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.845 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:12.845 00:25:12.845 Run status group 0 (all jobs): 00:25:12.845 READ: bw=86.7MiB/s (90.9MB/s), 28.5MiB/s-29.6MiB/s (29.9MB/s-31.1MB/s), io=437MiB (458MB), run=5044-5044msec 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
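For the target/dif.sh@109 variant the harness rebuilds the same topology with DIF type 2 and a heavier profile (bs=4k, numjobs=8, iodepth=16, three null bdevs across subsystems 0 through 2). Only the bdev creation flag changes; as the rpc_cmd traces that follow show, each bdev is created along the lines of:

  rpc.py bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2

DIF type 2 keeps the same 16-byte metadata area but changes the reference-tag checking semantics relative to type 1, so the same dif-insert-or-strip transport path gets exercised with different protection-information rules.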
00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:12.845 bdev_null0 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:12.845 [2024-07-25 13:54:09.099970] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:25:12.845 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:12.846 bdev_null1 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:12.846 bdev_null2 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:25:12.846 { 00:25:12.846 "params": { 00:25:12.846 "name": "Nvme$subsystem", 00:25:12.846 "trtype": "$TEST_TRANSPORT", 00:25:12.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:12.846 "adrfam": "ipv4", 00:25:12.846 "trsvcid": "$NVMF_PORT", 00:25:12.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:12.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:12.846 "hdgst": ${hdgst:-false}, 00:25:12.846 "ddgst": ${ddgst:-false} 00:25:12.846 }, 00:25:12.846 "method": "bdev_nvme_attach_controller" 00:25:12.846 } 00:25:12.846 EOF 00:25:12.846 )") 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:12.846 { 00:25:12.846 "params": { 00:25:12.846 "name": "Nvme$subsystem", 00:25:12.846 "trtype": "$TEST_TRANSPORT", 00:25:12.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:12.846 "adrfam": "ipv4", 00:25:12.846 "trsvcid": "$NVMF_PORT", 00:25:12.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:12.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:12.846 "hdgst": ${hdgst:-false}, 00:25:12.846 "ddgst": ${ddgst:-false} 00:25:12.846 }, 00:25:12.846 "method": "bdev_nvme_attach_controller" 00:25:12.846 } 00:25:12.846 EOF 00:25:12.846 )") 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:12.846 { 00:25:12.846 "params": { 00:25:12.846 "name": "Nvme$subsystem", 00:25:12.846 "trtype": "$TEST_TRANSPORT", 00:25:12.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:12.846 "adrfam": "ipv4", 00:25:12.846 "trsvcid": "$NVMF_PORT", 00:25:12.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:12.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:12.846 "hdgst": ${hdgst:-false}, 00:25:12.846 "ddgst": ${ddgst:-false} 00:25:12.846 }, 00:25:12.846 "method": "bdev_nvme_attach_controller" 00:25:12.846 } 00:25:12.846 EOF 00:25:12.846 )") 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:25:12.846 13:54:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:12.846 "params": { 00:25:12.846 "name": "Nvme0", 00:25:12.846 "trtype": "tcp", 00:25:12.847 "traddr": "10.0.0.2", 00:25:12.847 "adrfam": "ipv4", 00:25:12.847 "trsvcid": "4420", 00:25:12.847 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:12.847 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:12.847 "hdgst": false, 00:25:12.847 "ddgst": false 00:25:12.847 }, 00:25:12.847 "method": "bdev_nvme_attach_controller" 00:25:12.847 },{ 00:25:12.847 "params": { 00:25:12.847 "name": "Nvme1", 00:25:12.847 "trtype": "tcp", 00:25:12.847 "traddr": "10.0.0.2", 00:25:12.847 "adrfam": "ipv4", 00:25:12.847 "trsvcid": "4420", 00:25:12.847 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:12.847 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:12.847 "hdgst": false, 00:25:12.847 "ddgst": false 00:25:12.847 }, 00:25:12.847 "method": "bdev_nvme_attach_controller" 00:25:12.847 },{ 00:25:12.847 "params": { 00:25:12.847 "name": "Nvme2", 00:25:12.847 "trtype": "tcp", 00:25:12.847 "traddr": "10.0.0.2", 00:25:12.847 "adrfam": "ipv4", 00:25:12.847 "trsvcid": "4420", 00:25:12.847 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:12.847 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:12.847 "hdgst": false, 00:25:12.847 "ddgst": false 00:25:12.847 }, 00:25:12.847 "method": "bdev_nvme_attach_controller" 00:25:12.847 }' 00:25:12.847 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:12.847 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:12.847 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:12.847 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:12.847 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:12.847 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:12.847 13:54:09 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:25:12.847 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:12.847 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:12.847 13:54:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:12.847 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:12.847 ... 00:25:12.847 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:12.847 ... 00:25:12.847 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:12.847 ... 00:25:12.847 fio-3.35 00:25:12.847 Starting 24 threads 00:25:12.847 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.043 00:25:25.043 filename0: (groupid=0, jobs=1): err= 0: pid=679139: Thu Jul 25 13:54:20 2024 00:25:25.043 read: IOPS=465, BW=1862KiB/s (1907kB/s)(18.2MiB/10001msec) 00:25:25.043 slat (usec): min=8, max=104, avg=28.78, stdev=12.59 00:25:25.043 clat (usec): min=21750, max=75516, avg=34124.90, stdev=3711.18 00:25:25.043 lat (usec): min=21792, max=75565, avg=34153.68, stdev=3710.29 00:25:25.043 clat percentiles (usec): 00:25:25.043 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32375], 00:25:25.043 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:25:25.043 | 70.00th=[33162], 80.00th=[33817], 90.00th=[42730], 95.00th=[42730], 00:25:25.043 | 99.00th=[43254], 99.50th=[43254], 99.90th=[63177], 99.95th=[63177], 00:25:25.043 | 99.99th=[76022] 00:25:25.043 bw ( KiB/s): min= 1536, max= 2048, per=4.14%, avg=1859.37, stdev=161.73, samples=19 00:25:25.043 iops : min= 384, max= 512, avg=464.84, stdev=40.43, samples=19 00:25:25.043 lat (msec) : 50=99.66%, 100=0.34% 00:25:25.043 cpu : usr=98.10%, sys=1.49%, ctx=17, majf=0, minf=37 00:25:25.043 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:25.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.043 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.043 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.043 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.043 filename0: (groupid=0, jobs=1): err= 0: pid=679140: Thu Jul 25 13:54:20 2024 00:25:25.043 read: IOPS=467, BW=1868KiB/s (1913kB/s)(18.2MiB/10003msec) 00:25:25.043 slat (nsec): min=8246, max=89391, avg=34539.69, stdev=15425.64 00:25:25.043 clat (usec): min=6112, max=62678, avg=33934.90, stdev=3927.03 00:25:25.043 lat (usec): min=6121, max=62718, avg=33969.44, stdev=3926.10 00:25:25.043 clat percentiles (usec): 00:25:25.043 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:25:25.043 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:25:25.044 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42206], 95.00th=[42730], 00:25:25.044 | 99.00th=[42730], 99.50th=[43254], 99.90th=[62653], 99.95th=[62653], 00:25:25.044 | 99.99th=[62653] 00:25:25.044 bw ( KiB/s): min= 1536, max= 2048, per=4.14%, avg=1859.53, stdev=161.53, samples=19 00:25:25.044 iops : min= 384, max= 512, avg=464.84, stdev=40.43, samples=19 00:25:25.044 lat (msec) : 10=0.04%, 20=0.30%, 50=99.32%, 100=0.34% 
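The xtrace output above shows how nvmf/common.sh builds the JSON handed to fio's spdk_bdev engine: it appends one heredoc fragment per subsystem to a bash array (common.sh@554), validates the result with jq (@556), then joins the fragments with IFS=, and prints them (@557-@558), and the caller exposes that output as /dev/fd/62. Below is a minimal self-contained sketch of that pattern; the function name gen_target_json and the hard-coded transport values standing in for the harness's $TEST_TRANSPORT/$NVMF_* environment are illustrative, not the exact SPDK helper.

#!/usr/bin/env bash
# Sketch of the per-subsystem JSON assembly seen in nvmf/common.sh.
gen_target_json() {
    local config=() subsystem
    for subsystem in "$@"; do
        # One attach-controller object per subsystem; ${hdgst:-false} and
        # ${ddgst:-false} default the digest flags exactly as in the trace.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the array elements, mirroring the IFS=, / printf step.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

Called as gen_target_json 0 1 2, this emits the same three resolved Nvme0/Nvme1/Nvme2 attach-controller objects that common.sh@558 printed in the trace above.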
00:25:25.044 cpu : usr=98.19%, sys=1.41%, ctx=20, majf=0, minf=31 00:25:25.044 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:25.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.044 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.044 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.044 filename0: (groupid=0, jobs=1): err= 0: pid=679141: Thu Jul 25 13:54:20 2024 00:25:25.044 read: IOPS=466, BW=1867KiB/s (1912kB/s)(18.2MiB/10008msec) 00:25:25.044 slat (usec): min=9, max=113, avg=46.26, stdev=21.08 00:25:25.044 clat (usec): min=13656, max=67279, avg=33865.61, stdev=4049.41 00:25:25.044 lat (usec): min=13707, max=67315, avg=33911.87, stdev=4044.81 00:25:25.044 clat percentiles (usec): 00:25:25.044 | 1.00th=[31327], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:25:25.044 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:25:25.044 | 70.00th=[32900], 80.00th=[33424], 90.00th=[42206], 95.00th=[42730], 00:25:25.044 | 99.00th=[42730], 99.50th=[43254], 99.90th=[67634], 99.95th=[67634], 00:25:25.044 | 99.99th=[67634] 00:25:25.044 bw ( KiB/s): min= 1536, max= 2048, per=4.12%, avg=1852.63, stdev=156.00, samples=19 00:25:25.044 iops : min= 384, max= 512, avg=463.16, stdev=39.00, samples=19 00:25:25.044 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:25:25.044 cpu : usr=98.39%, sys=1.19%, ctx=16, majf=0, minf=26 00:25:25.044 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:25:25.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.044 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.044 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.044 filename0: (groupid=0, jobs=1): err= 0: pid=679142: Thu Jul 25 13:54:20 2024 00:25:25.044 read: IOPS=466, BW=1867KiB/s (1912kB/s)(18.2MiB/10011msec) 00:25:25.044 slat (nsec): min=8867, max=99279, avg=36614.10, stdev=17512.23 00:25:25.044 clat (usec): min=13135, max=62589, avg=33976.61, stdev=3692.64 00:25:25.044 lat (usec): min=13154, max=62613, avg=34013.22, stdev=3689.12 00:25:25.044 clat percentiles (usec): 00:25:25.044 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32375], 00:25:25.044 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:25:25.044 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42730], 95.00th=[42730], 00:25:25.044 | 99.00th=[42730], 99.50th=[43254], 99.90th=[55313], 99.95th=[55837], 00:25:25.044 | 99.99th=[62653] 00:25:25.044 bw ( KiB/s): min= 1536, max= 2048, per=4.15%, avg=1862.40, stdev=146.68, samples=20 00:25:25.044 iops : min= 384, max= 512, avg=465.60, stdev=36.67, samples=20 00:25:25.044 lat (msec) : 20=0.30%, 50=99.36%, 100=0.34% 00:25:25.044 cpu : usr=97.83%, sys=1.59%, ctx=44, majf=0, minf=21 00:25:25.044 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:25.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.044 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.044 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.044 filename0: (groupid=0, jobs=1): err= 0: pid=679143: Thu Jul 25 13:54:20 2024 00:25:25.044 read: IOPS=475, 
BW=1903KiB/s (1949kB/s)(18.6MiB/10028msec) 00:25:25.044 slat (nsec): min=3548, max=63791, avg=20138.02, stdev=9377.04 00:25:25.044 clat (usec): min=2352, max=43174, avg=33454.28, stdev=5193.28 00:25:25.044 lat (usec): min=2360, max=43194, avg=33474.41, stdev=5192.97 00:25:25.044 clat percentiles (usec): 00:25:25.044 | 1.00th=[ 7308], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:25:25.044 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:25:25.044 | 70.00th=[33162], 80.00th=[33817], 90.00th=[42730], 95.00th=[42730], 00:25:25.044 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:25:25.044 | 99.99th=[43254] 00:25:25.044 bw ( KiB/s): min= 1408, max= 2592, per=4.24%, avg=1902.40, stdev=225.91, samples=20 00:25:25.044 iops : min= 352, max= 648, avg=475.60, stdev=56.48, samples=20 00:25:25.044 lat (msec) : 4=0.63%, 10=1.13%, 20=0.54%, 50=97.69% 00:25:25.044 cpu : usr=97.42%, sys=1.76%, ctx=193, majf=0, minf=34 00:25:25.044 IO depths : 1=6.1%, 2=12.1%, 4=24.4%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:25:25.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.044 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.044 issued rwts: total=4772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.044 filename0: (groupid=0, jobs=1): err= 0: pid=679145: Thu Jul 25 13:54:20 2024 00:25:25.044 read: IOPS=466, BW=1867KiB/s (1912kB/s)(18.2MiB/10011msec) 00:25:25.044 slat (usec): min=12, max=119, avg=45.56, stdev=21.47 00:25:25.044 clat (usec): min=25766, max=43230, avg=33881.10, stdev=3324.08 00:25:25.044 lat (usec): min=25832, max=43252, avg=33926.66, stdev=3318.50 00:25:25.044 clat percentiles (usec): 00:25:25.044 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:25:25.044 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:25:25.044 | 70.00th=[32900], 80.00th=[33424], 90.00th=[42206], 95.00th=[42730], 00:25:25.044 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:25:25.044 | 99.99th=[43254] 00:25:25.044 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1862.40, stdev=158.00, samples=20 00:25:25.044 iops : min= 352, max= 512, avg=465.60, stdev=39.50, samples=20 00:25:25.044 lat (msec) : 50=100.00% 00:25:25.044 cpu : usr=97.11%, sys=1.94%, ctx=115, majf=0, minf=21 00:25:25.044 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:25:25.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.044 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.044 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.044 filename0: (groupid=0, jobs=1): err= 0: pid=679146: Thu Jul 25 13:54:20 2024 00:25:25.044 read: IOPS=465, BW=1862KiB/s (1907kB/s)(18.2MiB/10001msec) 00:25:25.044 slat (usec): min=8, max=100, avg=34.79, stdev=18.31 00:25:25.044 clat (usec): min=22049, max=75646, avg=34054.75, stdev=3728.65 00:25:25.044 lat (usec): min=22082, max=75672, avg=34089.55, stdev=3725.59 00:25:25.044 clat percentiles (usec): 00:25:25.044 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32375], 00:25:25.044 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:25:25.044 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42730], 95.00th=[42730], 00:25:25.044 | 99.00th=[43254], 99.50th=[43254], 
99.90th=[63701], 99.95th=[63701], 00:25:25.044 | 99.99th=[76022] 00:25:25.044 bw ( KiB/s): min= 1536, max= 2048, per=4.14%, avg=1859.37, stdev=161.73, samples=19 00:25:25.044 iops : min= 384, max= 512, avg=464.84, stdev=40.43, samples=19 00:25:25.044 lat (msec) : 50=99.66%, 100=0.34% 00:25:25.044 cpu : usr=98.06%, sys=1.45%, ctx=66, majf=0, minf=32 00:25:25.044 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:25.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.044 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.044 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.044 filename0: (groupid=0, jobs=1): err= 0: pid=679147: Thu Jul 25 13:54:20 2024 00:25:25.044 read: IOPS=466, BW=1867KiB/s (1911kB/s)(18.2MiB/10012msec) 00:25:25.044 slat (usec): min=7, max=117, avg=40.35, stdev=17.28 00:25:25.044 clat (usec): min=13488, max=64026, avg=33924.45, stdev=3884.96 00:25:25.044 lat (usec): min=13508, max=64045, avg=33964.80, stdev=3883.15 00:25:25.044 clat percentiles (usec): 00:25:25.044 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:25:25.044 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:25:25.044 | 70.00th=[32900], 80.00th=[33424], 90.00th=[42206], 95.00th=[42730], 00:25:25.044 | 99.00th=[42730], 99.50th=[43254], 99.90th=[64226], 99.95th=[64226], 00:25:25.044 | 99.99th=[64226] 00:25:25.044 bw ( KiB/s): min= 1536, max= 2048, per=4.14%, avg=1859.37, stdev=161.73, samples=19 00:25:25.044 iops : min= 384, max= 512, avg=464.84, stdev=40.43, samples=19 00:25:25.044 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:25:25.044 cpu : usr=98.26%, sys=1.34%, ctx=29, majf=0, minf=24 00:25:25.044 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:25:25.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.044 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.044 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.044 filename1: (groupid=0, jobs=1): err= 0: pid=679148: Thu Jul 25 13:54:20 2024 00:25:25.044 read: IOPS=500, BW=2001KiB/s (2049kB/s)(19.5MiB/10002msec) 00:25:25.044 slat (usec): min=8, max=103, avg=26.39, stdev=15.19 00:25:25.044 clat (usec): min=13549, max=63066, avg=31774.42, stdev=6236.14 00:25:25.044 lat (usec): min=13566, max=63089, avg=31800.81, stdev=6239.91 00:25:25.044 clat percentiles (usec): 00:25:25.044 | 1.00th=[20841], 5.00th=[21890], 10.00th=[22676], 20.00th=[26346], 00:25:25.044 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:25:25.044 | 70.00th=[32900], 80.00th=[33162], 90.00th=[42206], 95.00th=[42730], 00:25:25.044 | 99.00th=[43254], 99.50th=[50594], 99.90th=[63177], 99.95th=[63177], 00:25:25.044 | 99.99th=[63177] 00:25:25.044 bw ( KiB/s): min= 1408, max= 2528, per=4.45%, avg=1999.32, stdev=307.08, samples=19 00:25:25.044 iops : min= 352, max= 632, avg=499.79, stdev=76.81, samples=19 00:25:25.044 lat (msec) : 20=0.96%, 50=98.52%, 100=0.52% 00:25:25.044 cpu : usr=96.59%, sys=2.19%, ctx=698, majf=0, minf=28 00:25:25.044 IO depths : 1=3.3%, 2=6.8%, 4=15.3%, 8=64.3%, 16=10.4%, 32=0.0%, >=64=0.0% 00:25:25.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.044 complete : 0=0.0%, 4=91.7%, 8=3.8%, 16=4.5%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:25:25.044 issued rwts: total=5004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.044 filename1: (groupid=0, jobs=1): err= 0: pid=679149: Thu Jul 25 13:54:20 2024 00:25:25.044 read: IOPS=467, BW=1868KiB/s (1913kB/s)(18.2MiB/10003msec) 00:25:25.044 slat (usec): min=8, max=109, avg=36.29, stdev=19.67 00:25:25.044 clat (usec): min=13656, max=62470, avg=33913.38, stdev=3895.96 00:25:25.044 lat (usec): min=13670, max=62510, avg=33949.67, stdev=3893.42 00:25:25.044 clat percentiles (usec): 00:25:25.044 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:25:25.044 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:25:25.044 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42206], 95.00th=[42730], 00:25:25.044 | 99.00th=[42730], 99.50th=[43254], 99.90th=[62129], 99.95th=[62653], 00:25:25.044 | 99.99th=[62653] 00:25:25.044 bw ( KiB/s): min= 1536, max= 2048, per=4.14%, avg=1859.37, stdev=161.73, samples=19 00:25:25.044 iops : min= 384, max= 512, avg=464.84, stdev=40.43, samples=19 00:25:25.044 lat (msec) : 20=0.45%, 50=99.21%, 100=0.34% 00:25:25.044 cpu : usr=97.18%, sys=1.95%, ctx=113, majf=0, minf=32 00:25:25.044 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:25:25.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.044 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.044 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.044 filename1: (groupid=0, jobs=1): err= 0: pid=679150: Thu Jul 25 13:54:20 2024 00:25:25.044 read: IOPS=466, BW=1867KiB/s (1912kB/s)(18.2MiB/10008msec) 00:25:25.044 slat (usec): min=8, max=108, avg=34.21, stdev=12.10 00:25:25.044 clat (usec): min=13560, max=59471, avg=33963.54, stdev=3882.85 00:25:25.044 lat (usec): min=13590, max=59519, avg=33997.75, stdev=3882.63 00:25:25.044 clat percentiles (usec): 00:25:25.044 | 1.00th=[25297], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:25:25.044 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:25:25.044 | 70.00th=[33162], 80.00th=[33817], 90.00th=[42206], 95.00th=[42730], 00:25:25.044 | 99.00th=[42730], 99.50th=[43254], 99.90th=[59507], 99.95th=[59507], 00:25:25.044 | 99.99th=[59507] 00:25:25.044 bw ( KiB/s): min= 1536, max= 2048, per=4.14%, avg=1859.37, stdev=156.00, samples=19 00:25:25.044 iops : min= 384, max= 512, avg=464.84, stdev=39.00, samples=19 00:25:25.044 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:25:25.044 cpu : usr=97.49%, sys=1.79%, ctx=63, majf=0, minf=35 00:25:25.044 IO depths : 1=5.4%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:25:25.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.044 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.044 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.044 filename1: (groupid=0, jobs=1): err= 0: pid=679151: Thu Jul 25 13:54:20 2024 00:25:25.044 read: IOPS=465, BW=1862KiB/s (1907kB/s)(18.2MiB/10001msec) 00:25:25.044 slat (nsec): min=8339, max=64879, avg=28458.27, stdev=10845.32 00:25:25.044 clat (usec): min=22179, max=63270, avg=34087.76, stdev=3662.65 00:25:25.044 lat (usec): min=22192, max=63300, avg=34116.22, stdev=3662.68 00:25:25.044 clat percentiles (usec): 00:25:25.044 | 
1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32375], 00:25:25.044 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:25:25.044 | 70.00th=[33162], 80.00th=[33817], 90.00th=[42206], 95.00th=[42730], 00:25:25.044 | 99.00th=[42730], 99.50th=[43254], 99.90th=[63177], 99.95th=[63177], 00:25:25.044 | 99.99th=[63177] 00:25:25.044 bw ( KiB/s): min= 1536, max= 2048, per=4.14%, avg=1859.37, stdev=161.73, samples=19 00:25:25.044 iops : min= 384, max= 512, avg=464.84, stdev=40.43, samples=19 00:25:25.044 lat (msec) : 50=99.66%, 100=0.34% 00:25:25.044 cpu : usr=98.10%, sys=1.49%, ctx=14, majf=0, minf=27 00:25:25.044 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:25.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.044 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.044 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.044 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.044 filename1: (groupid=0, jobs=1): err= 0: pid=679153: Thu Jul 25 13:54:20 2024 00:25:25.044 read: IOPS=465, BW=1862KiB/s (1907kB/s)(18.2MiB/10001msec) 00:25:25.044 slat (usec): min=11, max=125, avg=53.87, stdev=26.18 00:25:25.044 clat (usec): min=27306, max=63223, avg=33870.12, stdev=3596.38 00:25:25.044 lat (usec): min=27373, max=63252, avg=33923.99, stdev=3601.13 00:25:25.044 clat percentiles (usec): 00:25:25.044 | 1.00th=[31327], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:25:25.044 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:25:25.044 | 70.00th=[33162], 80.00th=[33424], 90.00th=[41681], 95.00th=[42206], 00:25:25.045 | 99.00th=[42730], 99.50th=[42730], 99.90th=[63177], 99.95th=[63177], 00:25:25.045 | 99.99th=[63177] 00:25:25.045 bw ( KiB/s): min= 1536, max= 2048, per=4.14%, avg=1859.37, stdev=161.73, samples=19 00:25:25.045 iops : min= 384, max= 512, avg=464.84, stdev=40.43, samples=19 00:25:25.045 lat (msec) : 50=99.66%, 100=0.34% 00:25:25.045 cpu : usr=97.56%, sys=1.67%, ctx=81, majf=0, minf=35 00:25:25.045 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:25:25.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.045 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.045 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.045 filename1: (groupid=0, jobs=1): err= 0: pid=679154: Thu Jul 25 13:54:20 2024 00:25:25.045 read: IOPS=466, BW=1867KiB/s (1911kB/s)(18.2MiB/10012msec) 00:25:25.045 slat (usec): min=8, max=106, avg=32.06, stdev=20.40 00:25:25.045 clat (usec): min=21099, max=62715, avg=34016.24, stdev=3525.46 00:25:25.045 lat (usec): min=21108, max=62747, avg=34048.31, stdev=3520.45 00:25:25.045 clat percentiles (usec): 00:25:25.045 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:25:25.045 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:25:25.045 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42730], 95.00th=[42730], 00:25:25.045 | 99.00th=[43254], 99.50th=[43254], 99.90th=[51643], 99.95th=[51643], 00:25:25.045 | 99.99th=[62653] 00:25:25.045 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1862.40, stdev=158.00, samples=20 00:25:25.045 iops : min= 352, max= 512, avg=465.60, stdev=39.50, samples=20 00:25:25.045 lat (msec) : 50=99.66%, 100=0.34% 00:25:25.045 cpu : usr=98.12%, sys=1.48%, ctx=15, 
majf=0, minf=24 00:25:25.045 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:25.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.045 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.045 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.045 filename1: (groupid=0, jobs=1): err= 0: pid=679155: Thu Jul 25 13:54:20 2024 00:25:25.045 read: IOPS=467, BW=1868KiB/s (1913kB/s)(18.2MiB/10004msec) 00:25:25.045 slat (usec): min=9, max=112, avg=41.02, stdev=18.52 00:25:25.045 clat (usec): min=13654, max=64040, avg=33895.08, stdev=3936.14 00:25:25.045 lat (usec): min=13676, max=64080, avg=33936.10, stdev=3934.00 00:25:25.045 clat percentiles (usec): 00:25:25.045 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:25:25.045 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:25:25.045 | 70.00th=[32900], 80.00th=[33424], 90.00th=[42206], 95.00th=[42730], 00:25:25.045 | 99.00th=[43254], 99.50th=[43254], 99.90th=[63701], 99.95th=[64226], 00:25:25.045 | 99.99th=[64226] 00:25:25.045 bw ( KiB/s): min= 1536, max= 2048, per=4.14%, avg=1859.37, stdev=161.73, samples=19 00:25:25.045 iops : min= 384, max= 512, avg=464.84, stdev=40.43, samples=19 00:25:25.045 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:25:25.045 cpu : usr=98.04%, sys=1.55%, ctx=15, majf=0, minf=20 00:25:25.045 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:25:25.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.045 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.045 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.045 filename1: (groupid=0, jobs=1): err= 0: pid=679156: Thu Jul 25 13:54:20 2024 00:25:25.045 read: IOPS=468, BW=1874KiB/s (1919kB/s)(18.3MiB/10004msec) 00:25:25.045 slat (nsec): min=8231, max=77621, avg=18421.07, stdev=9741.52 00:25:25.045 clat (usec): min=13463, max=43268, avg=33986.43, stdev=3677.86 00:25:25.045 lat (usec): min=13517, max=43300, avg=34004.85, stdev=3675.17 00:25:25.045 clat percentiles (usec): 00:25:25.045 | 1.00th=[24511], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:25:25.045 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:25:25.045 | 70.00th=[33162], 80.00th=[33817], 90.00th=[42730], 95.00th=[42730], 00:25:25.045 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:25:25.045 | 99.99th=[43254] 00:25:25.045 bw ( KiB/s): min= 1408, max= 2048, per=4.17%, avg=1872.84, stdev=166.40, samples=19 00:25:25.045 iops : min= 352, max= 512, avg=468.21, stdev=41.60, samples=19 00:25:25.045 lat (msec) : 20=0.68%, 50=99.32% 00:25:25.045 cpu : usr=97.16%, sys=1.97%, ctx=139, majf=0, minf=59 00:25:25.045 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:25:25.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.045 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.045 issued rwts: total=4688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.045 filename2: (groupid=0, jobs=1): err= 0: pid=679157: Thu Jul 25 13:54:20 2024 00:25:25.045 read: IOPS=466, BW=1867KiB/s (1912kB/s)(18.2MiB/10011msec) 00:25:25.045 slat 
(nsec): min=9000, max=88820, avg=36678.40, stdev=14556.80 00:25:25.045 clat (usec): min=12924, max=55720, avg=33960.71, stdev=3795.81 00:25:25.045 lat (usec): min=12942, max=55745, avg=33997.39, stdev=3793.34 00:25:25.045 clat percentiles (usec): 00:25:25.045 | 1.00th=[25035], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:25:25.045 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:25:25.045 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42206], 95.00th=[42730], 00:25:25.045 | 99.00th=[42730], 99.50th=[43254], 99.90th=[55837], 99.95th=[55837], 00:25:25.045 | 99.99th=[55837] 00:25:25.045 bw ( KiB/s): min= 1536, max= 2048, per=4.15%, avg=1862.40, stdev=146.03, samples=20 00:25:25.045 iops : min= 384, max= 512, avg=465.60, stdev=36.51, samples=20 00:25:25.045 lat (msec) : 20=0.30%, 50=99.36%, 100=0.34% 00:25:25.045 cpu : usr=96.26%, sys=2.41%, ctx=175, majf=0, minf=25 00:25:25.045 IO depths : 1=5.4%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:25:25.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.045 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.045 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.045 filename2: (groupid=0, jobs=1): err= 0: pid=679158: Thu Jul 25 13:54:20 2024 00:25:25.045 read: IOPS=466, BW=1867KiB/s (1911kB/s)(18.2MiB/10012msec) 00:25:25.045 slat (usec): min=8, max=106, avg=41.02, stdev=18.88 00:25:25.045 clat (usec): min=13876, max=71815, avg=33934.86, stdev=4181.99 00:25:25.045 lat (usec): min=13911, max=71853, avg=33975.88, stdev=4179.34 00:25:25.045 clat percentiles (usec): 00:25:25.045 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:25:25.045 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:25:25.045 | 70.00th=[32900], 80.00th=[33424], 90.00th=[42206], 95.00th=[42730], 00:25:25.045 | 99.00th=[43254], 99.50th=[43254], 99.90th=[71828], 99.95th=[71828], 00:25:25.045 | 99.99th=[71828] 00:25:25.045 bw ( KiB/s): min= 1408, max= 2048, per=4.12%, avg=1852.63, stdev=167.26, samples=19 00:25:25.045 iops : min= 352, max= 512, avg=463.16, stdev=41.82, samples=19 00:25:25.045 lat (msec) : 20=0.39%, 50=99.27%, 100=0.34% 00:25:25.045 cpu : usr=98.27%, sys=1.32%, ctx=18, majf=0, minf=35 00:25:25.045 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:25.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.045 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.045 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.045 filename2: (groupid=0, jobs=1): err= 0: pid=679160: Thu Jul 25 13:54:20 2024 00:25:25.045 read: IOPS=474, BW=1896KiB/s (1942kB/s)(18.6MiB/10018msec) 00:25:25.045 slat (usec): min=7, max=126, avg=24.97, stdev=12.14 00:25:25.045 clat (usec): min=6383, max=43220, avg=33553.89, stdev=4471.42 00:25:25.045 lat (usec): min=6391, max=43241, avg=33578.87, stdev=4468.24 00:25:25.045 clat percentiles (usec): 00:25:25.045 | 1.00th=[15008], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:25:25.045 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:25:25.045 | 70.00th=[33162], 80.00th=[33817], 90.00th=[42206], 95.00th=[42730], 00:25:25.045 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:25:25.045 | 
99.99th=[43254] 00:25:25.045 bw ( KiB/s): min= 1536, max= 2408, per=4.22%, avg=1893.20, stdev=189.53, samples=20 00:25:25.045 iops : min= 384, max= 602, avg=473.30, stdev=47.38, samples=20 00:25:25.045 lat (msec) : 10=0.67%, 20=0.67%, 50=98.65% 00:25:25.045 cpu : usr=98.06%, sys=1.53%, ctx=16, majf=0, minf=27 00:25:25.045 IO depths : 1=6.0%, 2=12.0%, 4=24.2%, 8=51.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:25:25.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.045 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.045 issued rwts: total=4749,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.045 filename2: (groupid=0, jobs=1): err= 0: pid=679161: Thu Jul 25 13:54:20 2024 00:25:25.045 read: IOPS=465, BW=1862KiB/s (1907kB/s)(18.2MiB/10001msec) 00:25:25.045 slat (usec): min=9, max=102, avg=42.42, stdev=21.96 00:25:25.045 clat (usec): min=28076, max=63762, avg=33975.00, stdev=3695.99 00:25:25.045 lat (usec): min=28098, max=63785, avg=34017.42, stdev=3691.00 00:25:25.045 clat percentiles (usec): 00:25:25.045 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:25:25.045 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:25:25.045 | 70.00th=[32900], 80.00th=[33424], 90.00th=[42206], 95.00th=[42730], 00:25:25.045 | 99.00th=[42730], 99.50th=[43254], 99.90th=[63701], 99.95th=[63701], 00:25:25.045 | 99.99th=[63701] 00:25:25.045 bw ( KiB/s): min= 1536, max= 2048, per=4.14%, avg=1859.37, stdev=161.73, samples=19 00:25:25.045 iops : min= 384, max= 512, avg=464.84, stdev=40.43, samples=19 00:25:25.045 lat (msec) : 50=99.66%, 100=0.34% 00:25:25.045 cpu : usr=98.16%, sys=1.43%, ctx=16, majf=0, minf=32 00:25:25.045 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:25:25.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.045 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.045 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.045 filename2: (groupid=0, jobs=1): err= 0: pid=679162: Thu Jul 25 13:54:20 2024 00:25:25.045 read: IOPS=465, BW=1862KiB/s (1906kB/s)(18.2MiB/10004msec) 00:25:25.045 slat (usec): min=8, max=120, avg=54.65, stdev=22.87 00:25:25.045 clat (usec): min=25436, max=69943, avg=33890.88, stdev=3828.55 00:25:25.045 lat (usec): min=25496, max=69967, avg=33945.53, stdev=3832.98 00:25:25.045 clat percentiles (usec): 00:25:25.045 | 1.00th=[31589], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:25:25.045 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:25:25.045 | 70.00th=[33162], 80.00th=[33424], 90.00th=[41681], 95.00th=[42206], 00:25:25.045 | 99.00th=[42730], 99.50th=[43254], 99.90th=[69731], 99.95th=[69731], 00:25:25.045 | 99.99th=[69731] 00:25:25.045 bw ( KiB/s): min= 1408, max= 2048, per=4.12%, avg=1852.63, stdev=167.26, samples=19 00:25:25.045 iops : min= 352, max= 512, avg=463.16, stdev=41.82, samples=19 00:25:25.045 lat (msec) : 50=99.66%, 100=0.34% 00:25:25.045 cpu : usr=97.32%, sys=1.75%, ctx=91, majf=0, minf=29 00:25:25.045 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:25:25.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.045 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.045 issued rwts: total=4656,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:25:25.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.045 filename2: (groupid=0, jobs=1): err= 0: pid=679163: Thu Jul 25 13:54:20 2024 00:25:25.045 read: IOPS=466, BW=1867KiB/s (1912kB/s)(18.2MiB/10011msec) 00:25:25.045 slat (nsec): min=12508, max=65706, avg=31563.30, stdev=9247.36 00:25:25.045 clat (usec): min=26777, max=43219, avg=34002.37, stdev=3279.24 00:25:25.045 lat (usec): min=26813, max=43250, avg=34033.93, stdev=3278.48 00:25:25.045 clat percentiles (usec): 00:25:25.045 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32375], 00:25:25.045 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:25:25.045 | 70.00th=[33162], 80.00th=[33817], 90.00th=[42206], 95.00th=[42730], 00:25:25.045 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:25:25.045 | 99.99th=[43254] 00:25:25.045 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1862.40, stdev=158.00, samples=20 00:25:25.045 iops : min= 352, max= 512, avg=465.60, stdev=39.50, samples=20 00:25:25.045 lat (msec) : 50=100.00% 00:25:25.045 cpu : usr=97.02%, sys=1.95%, ctx=177, majf=0, minf=24 00:25:25.045 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:25:25.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.045 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.045 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.045 filename2: (groupid=0, jobs=1): err= 0: pid=679164: Thu Jul 25 13:54:20 2024 00:25:25.045 read: IOPS=467, BW=1868KiB/s (1913kB/s)(18.2MiB/10003msec) 00:25:25.045 slat (usec): min=8, max=116, avg=38.12, stdev=18.48 00:25:25.045 clat (usec): min=13637, max=73766, avg=33911.35, stdev=3957.28 00:25:25.045 lat (usec): min=13679, max=73808, avg=33949.47, stdev=3954.34 00:25:25.045 clat percentiles (usec): 00:25:25.045 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:25:25.045 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:25:25.045 | 70.00th=[32900], 80.00th=[33424], 90.00th=[42730], 95.00th=[42730], 00:25:25.045 | 99.00th=[42730], 99.50th=[43254], 99.90th=[62653], 99.95th=[62653], 00:25:25.045 | 99.99th=[73925] 00:25:25.045 bw ( KiB/s): min= 1536, max= 2048, per=4.14%, avg=1859.53, stdev=161.53, samples=19 00:25:25.045 iops : min= 384, max= 512, avg=464.84, stdev=40.43, samples=19 00:25:25.045 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:25:25.045 cpu : usr=97.26%, sys=2.00%, ctx=117, majf=0, minf=35 00:25:25.045 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:25.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.045 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.045 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.045 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.045 filename2: (groupid=0, jobs=1): err= 0: pid=679165: Thu Jul 25 13:54:20 2024 00:25:25.046 read: IOPS=466, BW=1867KiB/s (1912kB/s)(18.2MiB/10011msec) 00:25:25.046 slat (nsec): min=8404, max=77788, avg=30123.04, stdev=11473.25 00:25:25.046 clat (usec): min=25052, max=43711, avg=34018.10, stdev=3281.82 00:25:25.046 lat (usec): min=25062, max=43724, avg=34048.23, stdev=3281.57 00:25:25.046 clat percentiles (usec): 00:25:25.046 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 
20.00th=[32375], 00:25:25.046 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:25:25.046 | 70.00th=[33162], 80.00th=[33817], 90.00th=[42206], 95.00th=[42730], 00:25:25.046 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:25:25.046 | 99.99th=[43779] 00:25:25.046 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1862.40, stdev=158.00, samples=20 00:25:25.046 iops : min= 352, max= 512, avg=465.60, stdev=39.50, samples=20 00:25:25.046 lat (msec) : 50=100.00% 00:25:25.046 cpu : usr=96.86%, sys=1.99%, ctx=189, majf=0, minf=27 00:25:25.046 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:25:25.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.046 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:25.046 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:25.046 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:25.046 00:25:25.046 Run status group 0 (all jobs): 00:25:25.046 READ: bw=43.8MiB/s (46.0MB/s), 1862KiB/s-2001KiB/s (1906kB/s-2049kB/s), io=440MiB (461MB), run=10001-10028msec 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:25.046 bdev_null0 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:25.046 [2024-07-25 13:54:20.772299] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:25.046 bdev_null1 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:25:25.046 { 00:25:25.046 "params": { 00:25:25.046 "name": "Nvme$subsystem", 00:25:25.046 "trtype": "$TEST_TRANSPORT", 00:25:25.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:25.046 "adrfam": "ipv4", 00:25:25.046 "trsvcid": "$NVMF_PORT", 00:25:25.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:25.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:25.046 "hdgst": ${hdgst:-false}, 00:25:25.046 "ddgst": ${ddgst:-false} 00:25:25.046 }, 00:25:25.046 "method": "bdev_nvme_attach_controller" 00:25:25.046 } 00:25:25.046 EOF 00:25:25.046 )") 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:25.046 { 00:25:25.046 "params": { 00:25:25.046 "name": "Nvme$subsystem", 00:25:25.046 "trtype": "$TEST_TRANSPORT", 00:25:25.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:25.046 "adrfam": "ipv4", 00:25:25.046 "trsvcid": "$NVMF_PORT", 00:25:25.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:25.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:25.046 "hdgst": ${hdgst:-false}, 00:25:25.046 "ddgst": ${ddgst:-false} 00:25:25.046 }, 00:25:25.046 "method": "bdev_nvme_attach_controller" 00:25:25.046 } 00:25:25.046 EOF 00:25:25.046 )") 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:25.046 "params": { 00:25:25.046 "name": "Nvme0", 00:25:25.046 "trtype": "tcp", 00:25:25.046 "traddr": "10.0.0.2", 00:25:25.046 "adrfam": "ipv4", 00:25:25.046 "trsvcid": "4420", 00:25:25.046 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:25.046 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:25.046 "hdgst": false, 00:25:25.046 "ddgst": false 00:25:25.046 }, 00:25:25.046 "method": "bdev_nvme_attach_controller" 00:25:25.046 },{ 00:25:25.046 "params": { 00:25:25.046 "name": "Nvme1", 00:25:25.046 "trtype": "tcp", 00:25:25.046 "traddr": "10.0.0.2", 00:25:25.046 "adrfam": "ipv4", 00:25:25.046 "trsvcid": "4420", 00:25:25.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:25.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:25.046 "hdgst": false, 00:25:25.046 "ddgst": false 00:25:25.046 }, 00:25:25.046 "method": "bdev_nvme_attach_controller" 00:25:25.046 }' 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:25.046 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:25.047 13:54:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:25.047 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:25.047 ... 00:25:25.047 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:25.047 ... 
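Just before the fio banner, autotest_common.sh's fio_plugin wrapper decides what to preload: it runs ldd against build/fio/spdk_bdev, greps for each sanitizer runtime ('libasan', 'libclang_rt.asan'), and prepends any hit to LD_PRELOAD ahead of the plugin so the sanitizer initializes first (@1339-@1352); in this run both greps came back empty, so only the plugin is preloaded. The generated job file and the JSON target config reach fio as /dev/fd/61 and /dev/fd/62. A condensed sketch of that launch follows, using plain process substitution where the harness pins descriptors 61/62 explicitly; the job parameters and the gen_target_json helper from the sketch above are illustrative.

#!/usr/bin/env bash
# Sketch of the fio_plugin launch traced above.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
plugin=$spdk/build/fio/spdk_bdev

# Preload the matching ASan runtime, if the plugin links one, ahead of
# the plugin itself; on a non-sanitized build asan_lib stays empty and
# LD_PRELOAD carries only the plugin (as in the log's LD_PRELOAD line).
asan_lib=$(ldd "$plugin" | grep -E 'libasan|libclang_rt\.asan' | awk '{print $3}')

# fio takes the bdev JSON config via --spdk_json_conf and the job file
# as a positional argument; both can be anonymous /dev/fd paths.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_target_json 0 1) \
    <(printf '[global]\nrw=randread\nbs=8k\niodepth=8\nnumjobs=2\nruntime=5\ntime_based=1\n[filename0]\nfilename=Nvme0n1\n[filename1]\nfilename=Nvme1n1\n')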
00:25:25.047 fio-3.35 00:25:25.047 Starting 4 threads 00:25:25.047 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.307 00:25:30.307 filename0: (groupid=0, jobs=1): err= 0: pid=680539: Thu Jul 25 13:54:27 2024 00:25:30.307 read: IOPS=1897, BW=14.8MiB/s (15.5MB/s)(74.2MiB/5004msec) 00:25:30.307 slat (nsec): min=3749, max=86800, avg=16191.12, stdev=9366.09 00:25:30.307 clat (usec): min=871, max=12056, avg=4159.01, stdev=550.79 00:25:30.307 lat (usec): min=888, max=12079, avg=4175.21, stdev=551.00 00:25:30.307 clat percentiles (usec): 00:25:30.307 | 1.00th=[ 2704], 5.00th=[ 3392], 10.00th=[ 3621], 20.00th=[ 3916], 00:25:30.307 | 30.00th=[ 4047], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4228], 00:25:30.307 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4686], 00:25:30.307 | 99.00th=[ 6325], 99.50th=[ 7177], 99.90th=[ 7701], 99.95th=[ 8586], 00:25:30.307 | 99.99th=[11994] 00:25:30.307 bw ( KiB/s): min=14656, max=16016, per=25.39%, avg=15177.60, stdev=382.53, samples=10 00:25:30.307 iops : min= 1832, max= 2002, avg=1897.20, stdev=47.82, samples=10 00:25:30.307 lat (usec) : 1000=0.04% 00:25:30.307 lat (msec) : 2=0.35%, 4=24.52%, 10=75.08%, 20=0.01% 00:25:30.307 cpu : usr=95.08%, sys=4.42%, ctx=12, majf=0, minf=57 00:25:30.307 IO depths : 1=0.7%, 2=18.1%, 4=54.9%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:30.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.307 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.307 issued rwts: total=9493,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.307 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:30.307 filename0: (groupid=0, jobs=1): err= 0: pid=680540: Thu Jul 25 13:54:27 2024 00:25:30.307 read: IOPS=1831, BW=14.3MiB/s (15.0MB/s)(71.5MiB/5001msec) 00:25:30.307 slat (usec): min=3, max=125, avg=20.06, stdev= 9.97 00:25:30.307 clat (usec): min=987, max=7734, avg=4291.73, stdev=642.58 00:25:30.307 lat (usec): min=1012, max=7746, avg=4311.78, stdev=641.89 00:25:30.307 clat percentiles (usec): 00:25:30.307 | 1.00th=[ 2573], 5.00th=[ 3589], 10.00th=[ 3851], 20.00th=[ 4080], 00:25:30.307 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4228], 00:25:30.307 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4883], 95.00th=[ 5669], 00:25:30.307 | 99.00th=[ 6915], 99.50th=[ 7111], 99.90th=[ 7504], 99.95th=[ 7570], 00:25:30.307 | 99.99th=[ 7767] 00:25:30.307 bw ( KiB/s): min=14272, max=15072, per=24.58%, avg=14696.89, stdev=294.93, samples=9 00:25:30.307 iops : min= 1784, max= 1884, avg=1837.11, stdev=36.87, samples=9 00:25:30.307 lat (usec) : 1000=0.01% 00:25:30.307 lat (msec) : 2=0.64%, 4=13.40%, 10=85.95% 00:25:30.307 cpu : usr=84.70%, sys=9.30%, ctx=521, majf=0, minf=47 00:25:30.307 IO depths : 1=0.9%, 2=18.0%, 4=55.1%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:30.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.307 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.307 issued rwts: total=9158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.307 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:30.307 filename1: (groupid=0, jobs=1): err= 0: pid=680541: Thu Jul 25 13:54:27 2024 00:25:30.307 read: IOPS=1899, BW=14.8MiB/s (15.6MB/s)(74.2MiB/5002msec) 00:25:30.307 slat (nsec): min=3941, max=87674, avg=15177.12, stdev=8760.29 00:25:30.307 clat (usec): min=821, max=7480, avg=4161.35, stdev=487.00 00:25:30.307 lat (usec): min=838, max=7503, avg=4176.53, stdev=487.23 00:25:30.307 
clat percentiles (usec): 00:25:30.307 | 1.00th=[ 2638], 5.00th=[ 3458], 10.00th=[ 3654], 20.00th=[ 3916], 00:25:30.307 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4228], 00:25:30.307 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4686], 00:25:30.307 | 99.00th=[ 5997], 99.50th=[ 6783], 99.90th=[ 7308], 99.95th=[ 7308], 00:25:30.307 | 99.99th=[ 7504] 00:25:30.307 bw ( KiB/s): min=14720, max=15872, per=25.40%, avg=15188.60, stdev=390.84, samples=10 00:25:30.307 iops : min= 1840, max= 1984, avg=1898.50, stdev=48.86, samples=10 00:25:30.307 lat (usec) : 1000=0.01% 00:25:30.307 lat (msec) : 2=0.34%, 4=23.76%, 10=75.89% 00:25:30.307 cpu : usr=94.54%, sys=4.94%, ctx=34, majf=0, minf=40 00:25:30.307 IO depths : 1=0.5%, 2=14.3%, 4=58.6%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:30.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.307 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.307 issued rwts: total=9499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.307 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:30.307 filename1: (groupid=0, jobs=1): err= 0: pid=680542: Thu Jul 25 13:54:27 2024 00:25:30.307 read: IOPS=1848, BW=14.4MiB/s (15.1MB/s)(72.2MiB/5002msec) 00:25:30.307 slat (nsec): min=4252, max=87025, avg=17486.41, stdev=9868.88 00:25:30.307 clat (usec): min=827, max=7793, avg=4266.03, stdev=574.97 00:25:30.307 lat (usec): min=855, max=7801, avg=4283.52, stdev=574.36 00:25:30.307 clat percentiles (usec): 00:25:30.307 | 1.00th=[ 2835], 5.00th=[ 3621], 10.00th=[ 3818], 20.00th=[ 4047], 00:25:30.307 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:25:30.307 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4686], 95.00th=[ 5342], 00:25:30.307 | 99.00th=[ 6587], 99.50th=[ 6980], 99.90th=[ 7504], 99.95th=[ 7635], 00:25:30.307 | 99.99th=[ 7767] 00:25:30.307 bw ( KiB/s): min=14528, max=15088, per=24.75%, avg=14794.90, stdev=172.65, samples=10 00:25:30.307 iops : min= 1816, max= 1886, avg=1849.30, stdev=21.61, samples=10 00:25:30.307 lat (usec) : 1000=0.09% 00:25:30.307 lat (msec) : 2=0.35%, 4=17.25%, 10=82.31% 00:25:30.307 cpu : usr=94.88%, sys=4.62%, ctx=11, majf=0, minf=31 00:25:30.307 IO depths : 1=0.4%, 2=17.8%, 4=55.7%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:30.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.307 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.307 issued rwts: total=9245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.307 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:30.307 00:25:30.307 Run status group 0 (all jobs): 00:25:30.307 READ: bw=58.4MiB/s (61.2MB/s), 14.3MiB/s-14.8MiB/s (15.0MB/s-15.6MB/s), io=292MiB (306MB), run=5001-5004msec 00:25:30.307 13:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:25:30.307 13:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:30.307 13:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:30.307 13:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:30.307 13:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:30.307 13:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:30.307 13:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.565 13:54:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:30.565 13:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.565 13:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:30.565 13:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.565 13:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:30.565 13:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.565 13:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:30.565 13:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:30.565 13:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:25:30.565 13:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:30.565 13:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.565 13:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:30.565 13:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.565 13:54:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:30.565 13:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.565 13:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:30.565 13:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.565 00:25:30.565 real 0m24.622s 00:25:30.565 user 4m32.157s 00:25:30.565 sys 0m7.248s 00:25:30.565 13:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:30.565 13:54:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:30.565 ************************************ 00:25:30.565 END TEST fio_dif_rand_params 00:25:30.565 ************************************ 00:25:30.566 13:54:27 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:25:30.566 13:54:27 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:30.566 13:54:27 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:30.566 13:54:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:30.566 ************************************ 00:25:30.566 START TEST fio_dif_digest 00:25:30.566 ************************************ 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:25:30.566 13:54:27 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:30.566 bdev_null0 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:30.566 [2024-07-25 13:54:27.459130] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:30.566 { 00:25:30.566 "params": { 00:25:30.566 "name": "Nvme$subsystem", 00:25:30.566 "trtype": "$TEST_TRANSPORT", 00:25:30.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.566 "adrfam": "ipv4", 00:25:30.566 "trsvcid": "$NVMF_PORT", 00:25:30.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.566 "hdgst": ${hdgst:-false}, 00:25:30.566 "ddgst": ${ddgst:-false} 00:25:30.566 }, 00:25:30.566 "method": 
"bdev_nvme_attach_controller" 00:25:30.566 } 00:25:30.566 EOF 00:25:30.566 )") 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:30.566 "params": { 00:25:30.566 "name": "Nvme0", 00:25:30.566 "trtype": "tcp", 00:25:30.566 "traddr": "10.0.0.2", 00:25:30.566 "adrfam": "ipv4", 00:25:30.566 "trsvcid": "4420", 00:25:30.566 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:30.566 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:30.566 "hdgst": true, 00:25:30.566 "ddgst": true 00:25:30.566 }, 00:25:30.566 "method": "bdev_nvme_attach_controller" 00:25:30.566 }' 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:30.566 13:54:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:30.824 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:30.824 ... 
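Note: the fio job above runs through SPDK's fio bdev plugin rather than against a kernel block device — the generated bdev_nvme_attach_controller JSON is handed to fio over /dev/fd, build/fio/spdk_bdev is LD_PRELOADed, and fio is started with --ioengine=spdk_bdev and --spdk_json_conf, exactly as traced. A minimal standalone sketch of the same invocation (paths and the saved config file name are illustrative; assumes an SPDK tree configured with --with-fio):

    LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
      fio --name=filename0 --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
          --filename=Nvme0n1 --thread=1 \
          --rw=randread --bs=128k --iodepth=3 --numjobs=3 --runtime=10
    # --filename selects the bdev the JSON config creates (controller Nvme0,
    # namespace 1); --thread=1 is required because the plugin does not support
    # fio's forked-process mode.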
00:25:30.824 fio-3.35 00:25:30.824 Starting 3 threads 00:25:30.824 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.021 00:25:43.021 filename0: (groupid=0, jobs=1): err= 0: pid=681394: Thu Jul 25 13:54:38 2024 00:25:43.021 read: IOPS=206, BW=25.9MiB/s (27.1MB/s)(260MiB/10045msec) 00:25:43.021 slat (nsec): min=7999, max=94906, avg=13773.01, stdev=2916.41 00:25:43.021 clat (usec): min=9042, max=53799, avg=14457.66, stdev=1598.75 00:25:43.021 lat (usec): min=9056, max=53807, avg=14471.43, stdev=1598.78 00:25:43.021 clat percentiles (usec): 00:25:43.021 | 1.00th=[11207], 5.00th=[12649], 10.00th=[13173], 20.00th=[13566], 00:25:43.021 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14615], 00:25:43.021 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15795], 95.00th=[16188], 00:25:43.021 | 99.00th=[17171], 99.50th=[17957], 99.90th=[19006], 99.95th=[49021], 00:25:43.021 | 99.99th=[53740] 00:25:43.021 bw ( KiB/s): min=25856, max=28416, per=33.52%, avg=26575.40, stdev=539.90, samples=20 00:25:43.021 iops : min= 202, max= 222, avg=207.60, stdev= 4.24, samples=20 00:25:43.021 lat (msec) : 10=0.38%, 20=99.52%, 50=0.05%, 100=0.05% 00:25:43.021 cpu : usr=92.93%, sys=6.58%, ctx=28, majf=0, minf=239 00:25:43.021 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:43.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.022 issued rwts: total=2079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.022 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:43.022 filename0: (groupid=0, jobs=1): err= 0: pid=681395: Thu Jul 25 13:54:38 2024 00:25:43.022 read: IOPS=203, BW=25.4MiB/s (26.6MB/s)(255MiB/10044msec) 00:25:43.022 slat (nsec): min=8042, max=39194, avg=13697.33, stdev=2291.33 00:25:43.022 clat (usec): min=8809, max=54143, avg=14733.29, stdev=1556.76 00:25:43.022 lat (usec): min=8821, max=54156, avg=14746.98, stdev=1556.73 00:25:43.022 clat percentiles (usec): 00:25:43.022 | 1.00th=[11207], 5.00th=[13042], 10.00th=[13566], 20.00th=[13960], 00:25:43.022 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[14877], 00:25:43.022 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16057], 95.00th=[16450], 00:25:43.022 | 99.00th=[17171], 99.50th=[17433], 99.90th=[20055], 99.95th=[45876], 00:25:43.022 | 99.99th=[54264] 00:25:43.022 bw ( KiB/s): min=25600, max=27648, per=32.90%, avg=26086.40, stdev=469.11, samples=20 00:25:43.022 iops : min= 200, max= 216, avg=203.80, stdev= 3.66, samples=20 00:25:43.022 lat (msec) : 10=0.59%, 20=99.26%, 50=0.10%, 100=0.05% 00:25:43.022 cpu : usr=92.61%, sys=6.91%, ctx=29, majf=0, minf=149 00:25:43.022 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:43.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.022 issued rwts: total=2040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.022 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:43.022 filename0: (groupid=0, jobs=1): err= 0: pid=681396: Thu Jul 25 13:54:38 2024 00:25:43.022 read: IOPS=209, BW=26.2MiB/s (27.4MB/s)(263MiB/10046msec) 00:25:43.022 slat (nsec): min=8027, max=42880, avg=14496.93, stdev=3361.96 00:25:43.022 clat (usec): min=10597, max=60234, avg=14273.21, stdev=2622.14 00:25:43.022 lat (usec): min=10609, max=60248, avg=14287.71, stdev=2622.15 00:25:43.022 clat percentiles (usec): 
00:25:43.022 | 1.00th=[11863], 5.00th=[12518], 10.00th=[12911], 20.00th=[13304], 00:25:43.022 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14091], 60.00th=[14353], 00:25:43.022 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15401], 95.00th=[15795], 00:25:43.022 | 99.00th=[16581], 99.50th=[17433], 99.90th=[56886], 99.95th=[60031], 00:25:43.022 | 99.99th=[60031] 00:25:43.022 bw ( KiB/s): min=23808, max=28160, per=33.94%, avg=26905.60, stdev=1107.81, samples=20 00:25:43.022 iops : min= 186, max= 220, avg=210.20, stdev= 8.65, samples=20 00:25:43.022 lat (msec) : 20=99.62%, 50=0.10%, 100=0.29% 00:25:43.022 cpu : usr=92.13%, sys=7.24%, ctx=42, majf=0, minf=131 00:25:43.022 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:43.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.022 issued rwts: total=2103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.022 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:43.022 00:25:43.022 Run status group 0 (all jobs): 00:25:43.022 READ: bw=77.4MiB/s (81.2MB/s), 25.4MiB/s-26.2MiB/s (26.6MB/s-27.4MB/s), io=778MiB (816MB), run=10044-10046msec 00:25:43.022 13:54:38 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:25:43.022 13:54:38 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:25:43.022 13:54:38 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:25:43.022 13:54:38 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:43.022 13:54:38 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:25:43.022 13:54:38 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:43.022 13:54:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.022 13:54:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:43.022 13:54:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.022 13:54:38 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:43.022 13:54:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.022 13:54:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:43.022 13:54:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.022 00:25:43.022 real 0m11.192s 00:25:43.022 user 0m29.050s 00:25:43.022 sys 0m2.358s 00:25:43.022 13:54:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:43.022 13:54:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:43.022 ************************************ 00:25:43.022 END TEST fio_dif_digest 00:25:43.022 ************************************ 00:25:43.022 13:54:38 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:25:43.022 13:54:38 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:25:43.022 13:54:38 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:43.022 13:54:38 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:25:43.022 13:54:38 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:43.022 13:54:38 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:25:43.022 13:54:38 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:43.022 13:54:38 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:43.022 rmmod nvme_tcp 00:25:43.022 rmmod 
nvme_fabrics 00:25:43.022 rmmod nvme_keyring 00:25:43.022 13:54:38 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:43.022 13:54:38 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:25:43.022 13:54:38 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:25:43.022 13:54:38 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 674598 ']' 00:25:43.022 13:54:38 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 674598 00:25:43.022 13:54:38 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 674598 ']' 00:25:43.022 13:54:38 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 674598 00:25:43.022 13:54:38 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:25:43.022 13:54:38 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:43.022 13:54:38 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 674598 00:25:43.022 13:54:38 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:43.022 13:54:38 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:43.022 13:54:38 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 674598' 00:25:43.022 killing process with pid 674598 00:25:43.022 13:54:38 nvmf_dif -- common/autotest_common.sh@969 -- # kill 674598 00:25:43.022 13:54:38 nvmf_dif -- common/autotest_common.sh@974 -- # wait 674598 00:25:43.022 13:54:38 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:25:43.022 13:54:38 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:43.281 Waiting for block devices as requested 00:25:43.281 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:25:43.281 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:43.539 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:43.539 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:43.539 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:43.799 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:43.799 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:43.799 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:43.799 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:44.057 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:44.057 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:44.057 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:44.315 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:44.315 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:44.315 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:44.315 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:44.575 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:44.575 13:54:41 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:44.575 13:54:41 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:44.575 13:54:41 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:44.575 13:54:41 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:44.575 13:54:41 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.575 13:54:41 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:44.575 13:54:41 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.113 13:54:43 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:47.113 00:25:47.113 real 1m7.549s 00:25:47.113 user 6m29.425s 00:25:47.113 sys 0m19.303s 00:25:47.113 13:54:43 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:47.113 13:54:43 nvmf_dif -- 
common/autotest_common.sh@10 -- # set +x 00:25:47.113 ************************************ 00:25:47.113 END TEST nvmf_dif 00:25:47.113 ************************************ 00:25:47.113 13:54:43 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:25:47.113 13:54:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:47.113 13:54:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:47.113 13:54:43 -- common/autotest_common.sh@10 -- # set +x 00:25:47.113 ************************************ 00:25:47.113 START TEST nvmf_abort_qd_sizes 00:25:47.113 ************************************ 00:25:47.113 13:54:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:25:47.113 * Looking for test storage... 00:25:47.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:47.113 13:54:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:47.113 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:25:47.113 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.113 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.113 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.113 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.113 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.113 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.113 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.113 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.113 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.113 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.113 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:47.113 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:47.113 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.113 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.113 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:47.113 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.113 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:47.113 13:54:43 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.113 13:54:43 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.113 13:54:43 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.114 13:54:43 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:25:47.114 13:54:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:49.066 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:49.066 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:25:49.066 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:49.066 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:49.066 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:49.066 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:49.066 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:49.066 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:25:49.066 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:49.066 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:49.067 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:49.067 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:49.067 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:49.067 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
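Note: device discovery above matched two ports bound to the ice driver (device ID 0x159b, gathered via the e810 list), and the setup that follows builds a loopback NVMe/TCP topology from them — cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed 10.0.0.2 for the target side, while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator side. Once a subsystem is listening there (the spdk_target_abort test later in this run adds a listener on port 4420), the target could equally be reached from the root namespace with stock nvme-cli; a sketch only, using the NQN that test creates:

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn
    nvme list        # the remote namespace appears as a local block device
    nvme disconnect -n nqn.2016-06.io.spdk:testnqn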
00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:49.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:49.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:25:49.067 00:25:49.067 --- 10.0.0.2 ping statistics --- 00:25:49.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.067 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:49.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:49.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:25:49.067 00:25:49.067 --- 10.0.0.1 ping statistics --- 00:25:49.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.067 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:49.067 13:54:45 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:50.001 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:50.001 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:50.002 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:50.002 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:50.002 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:50.002 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:50.002 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:50.260 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:50.260 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:50.260 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:50.260 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:50.260 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:50.260 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:50.260 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:50.260 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:50.260 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:51.196 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:25:51.196 13:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:51.196 13:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:51.196 13:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:51.196 13:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:51.196 13:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:51.197 13:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:51.197 13:54:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:25:51.197 13:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:51.197 13:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:51.197 13:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:51.197 13:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=686311 00:25:51.197 13:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:25:51.197 13:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 686311 00:25:51.197 13:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 686311 ']' 00:25:51.197 13:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.197 13:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:51.197 13:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:51.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.197 13:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:51.197 13:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:51.197 [2024-07-25 13:54:48.140193] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:51.197 [2024-07-25 13:54:48.140267] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.197 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.197 [2024-07-25 13:54:48.201735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:51.454 [2024-07-25 13:54:48.308685] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:51.454 [2024-07-25 13:54:48.308742] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:51.454 [2024-07-25 13:54:48.308770] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:51.454 [2024-07-25 13:54:48.308781] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:51.454 [2024-07-25 13:54:48.308791] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:51.454 [2024-07-25 13:54:48.308840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.454 [2024-07-25 13:54:48.308896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:51.454 [2024-07-25 13:54:48.308964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:51.454 [2024-07-25 13:54:48.308966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:25:51.454 13:54:48 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:51.454 13:54:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:51.454 ************************************ 00:25:51.454 START TEST spdk_target_abort 00:25:51.454 ************************************ 00:25:51.454 13:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:25:51.454 13:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:25:51.454 13:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:25:51.454 13:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.454 13:54:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:54.729 spdk_targetn1 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:54.729 [2024-07-25 13:54:51.316095] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:54.729 [2024-07-25 13:54:51.348382] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:54.729 13:54:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:54.729 EAL: No free 2048 kB hugepages 
reported on node 1 00:25:58.001 Initializing NVMe Controllers 00:25:58.001 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:25:58.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:58.001 Initialization complete. Launching workers. 00:25:58.001 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14543, failed: 0 00:25:58.001 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1187, failed to submit 13356 00:25:58.001 success 783, unsuccess 404, failed 0 00:25:58.001 13:54:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:58.001 13:54:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:58.001 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.277 Initializing NVMe Controllers 00:26:01.277 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:01.277 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:01.277 Initialization complete. Launching workers. 00:26:01.277 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8517, failed: 0 00:26:01.277 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1227, failed to submit 7290 00:26:01.277 success 358, unsuccess 869, failed 0 00:26:01.277 13:54:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:01.277 13:54:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:01.277 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.563 Initializing NVMe Controllers 00:26:04.563 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:04.563 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:04.563 Initialization complete. Launching workers. 
00:26:04.563 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32087, failed: 0 00:26:04.563 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2737, failed to submit 29350 00:26:04.563 success 523, unsuccess 2214, failed 0 00:26:04.563 13:55:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:26:04.563 13:55:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.563 13:55:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:04.563 13:55:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.563 13:55:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:04.564 13:55:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.564 13:55:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:05.501 13:55:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.501 13:55:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 686311 00:26:05.501 13:55:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 686311 ']' 00:26:05.501 13:55:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 686311 00:26:05.501 13:55:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:26:05.501 13:55:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:05.501 13:55:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 686311 00:26:05.501 13:55:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:05.501 13:55:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:05.501 13:55:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 686311' 00:26:05.501 killing process with pid 686311 00:26:05.501 13:55:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 686311 00:26:05.501 13:55:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 686311 00:26:05.760 00:26:05.760 real 0m14.274s 00:26:05.760 user 0m54.176s 00:26:05.760 sys 0m2.445s 00:26:05.760 13:55:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:05.760 13:55:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:05.760 ************************************ 00:26:05.760 END TEST spdk_target_abort 00:26:05.760 ************************************ 00:26:05.760 13:55:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:26:05.760 13:55:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:05.760 13:55:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:05.760 13:55:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:05.760 ************************************ 00:26:05.760 START TEST kernel_target_abort 00:26:05.760 
************************************ 00:26:05.760 13:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:26:06.019 13:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:26:06.019 13:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:26:06.019 13:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.019 13:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.019 13:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.019 13:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.019 13:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.019 13:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.019 13:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.019 13:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.019 13:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.019 13:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:06.019 13:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:06.019 13:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:06.019 13:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:06.019 13:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:06.019 13:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:06.019 13:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:26:06.019 13:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:06.019 13:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:06.019 13:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:06.019 13:55:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:06.952 Waiting for block devices as requested 00:26:06.952 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:26:07.211 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:07.211 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:07.470 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:07.470 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:07.470 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:07.470 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:07.730 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:07.730 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:07.730 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:07.730 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:07.989 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:07.989 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:07.989 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:08.248 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:08.248 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:08.248 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:08.508 No valid GPT data, bailing 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:08.508 13:55:05 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:26:08.508 00:26:08.508 Discovery Log Number of Records 2, Generation counter 2 00:26:08.508 =====Discovery Log Entry 0====== 00:26:08.508 trtype: tcp 00:26:08.508 adrfam: ipv4 00:26:08.508 subtype: current discovery subsystem 00:26:08.508 treq: not specified, sq flow control disable supported 00:26:08.508 portid: 1 00:26:08.508 trsvcid: 4420 00:26:08.508 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:08.508 traddr: 10.0.0.1 00:26:08.508 eflags: none 00:26:08.508 sectype: none 00:26:08.508 =====Discovery Log Entry 1====== 00:26:08.508 trtype: tcp 00:26:08.508 adrfam: ipv4 00:26:08.508 subtype: nvme subsystem 00:26:08.508 treq: not specified, sq flow control disable supported 00:26:08.508 portid: 1 00:26:08.508 trsvcid: 4420 00:26:08.508 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:08.508 traddr: 10.0.0.1 00:26:08.508 eflags: none 00:26:08.508 sectype: none 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:08.508 13:55:05 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:08.508 13:55:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:08.508 EAL: No free 2048 kB hugepages reported on node 1 00:26:11.798 Initializing NVMe Controllers 00:26:11.798 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:11.798 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:11.798 Initialization complete. Launching workers. 00:26:11.798 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55409, failed: 0 00:26:11.798 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 55409, failed to submit 0 00:26:11.798 success 0, unsuccess 55409, failed 0 00:26:11.798 13:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:11.798 13:55:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:11.798 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.105 Initializing NVMe Controllers 00:26:15.105 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:15.105 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:15.105 Initialization complete. Launching workers. 
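Behind the configure_kernel_target trace above, the kernel NVMe-oF/TCP target is assembled through nvmet configfs. The xtrace elides the redirection targets of the echo calls; filling them in with the standard kernel nvmet attribute names (an assumption, not read from the trace) gives roughly:

    modprobe nvmet
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # the non-zoned, GPT-free disk found above
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"             # expose the subsystem on the port

The nvme discover output traced above then confirms the port answers on 10.0.0.1:4420 with both the discovery subsystem and nqn.2016-06.io.spdk:testnqn.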
00:26:15.106 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 99574, failed: 0 00:26:15.106 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25102, failed to submit 74472 00:26:15.106 success 0, unsuccess 25102, failed 0 00:26:15.106 13:55:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:15.106 13:55:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:15.106 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.390 Initializing NVMe Controllers 00:26:18.390 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:18.390 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:18.390 Initialization complete. Launching workers. 00:26:18.390 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 97074, failed: 0 00:26:18.390 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24270, failed to submit 72804 00:26:18.390 success 0, unsuccess 24270, failed 0 00:26:18.390 13:55:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:26:18.390 13:55:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:18.390 13:55:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:26:18.390 13:55:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:18.390 13:55:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:18.390 13:55:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:18.390 13:55:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:18.390 13:55:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:18.390 13:55:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:18.390 13:55:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:18.958 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:18.958 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:18.958 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:18.958 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:18.958 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:18.958 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:18.958 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:19.217 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:19.217 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:19.217 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:19.217 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:19.217 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:19.217 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:19.217 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:26:19.217 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:19.217 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:20.155 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:26:20.155 00:26:20.155 real 0m14.303s 00:26:20.155 user 0m6.592s 00:26:20.155 sys 0m3.032s 00:26:20.155 13:55:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:20.155 13:55:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:20.155 ************************************ 00:26:20.155 END TEST kernel_target_abort 00:26:20.155 ************************************ 00:26:20.155 13:55:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:20.155 13:55:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:26:20.155 13:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:20.155 13:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:26:20.155 13:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:20.155 13:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:26:20.155 13:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:20.155 13:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:20.155 rmmod nvme_tcp 00:26:20.155 rmmod nvme_fabrics 00:26:20.155 rmmod nvme_keyring 00:26:20.155 13:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:20.155 13:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:26:20.155 13:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:26:20.155 13:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 686311 ']' 00:26:20.155 13:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 686311 00:26:20.155 13:55:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 686311 ']' 00:26:20.155 13:55:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 686311 00:26:20.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (686311) - No such process 00:26:20.155 13:55:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 686311 is not found' 00:26:20.155 Process with pid 686311 is not found 00:26:20.155 13:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:20.155 13:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:21.531 Waiting for block devices as requested 00:26:21.531 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:26:21.531 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:21.790 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:21.790 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:21.790 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:22.048 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:22.048 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:22.048 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:22.048 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:22.305 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:22.305 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:22.305 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:22.305 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:22.563 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:22.563 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:22.563 0000:80:04.1 (8086 
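For reference, the clean_kernel_target sequence traced a little earlier tears the same configfs tree down in reverse; everything below is verbatim from the trace except the echo 0 redirection target, which is assumed to be the namespace enable attribute:

    echo 0 > "$subsys/namespaces/1/enable"
    rm -f  "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir  "$subsys/namespaces/1"
    rmdir  "$nvmet/ports/1"
    rmdir  "$subsys"
    modprobe -r nvmet_tcp nvmet

After that, setup.sh hands the ioatdma and nvme devices back to vfio-pci, as the PCI rebind lines show.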
0e21): vfio-pci -> ioatdma 00:26:22.821 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:22.821 13:55:19 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:22.821 13:55:19 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:22.821 13:55:19 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:22.821 13:55:19 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:22.821 13:55:19 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.821 13:55:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:22.821 13:55:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.358 13:55:21 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:25.358 00:26:25.358 real 0m38.193s 00:26:25.358 user 1m2.945s 00:26:25.358 sys 0m8.912s 00:26:25.358 13:55:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:25.358 13:55:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:25.358 ************************************ 00:26:25.358 END TEST nvmf_abort_qd_sizes 00:26:25.358 ************************************ 00:26:25.358 13:55:21 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:26:25.358 13:55:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:25.358 13:55:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:25.358 13:55:21 -- common/autotest_common.sh@10 -- # set +x 00:26:25.358 ************************************ 00:26:25.358 START TEST keyring_file 00:26:25.358 ************************************ 00:26:25.358 13:55:21 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:26:25.358 * Looking for test storage... 
00:26:25.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:26:25.358 13:55:21 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:26:25.358 13:55:21 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:25.358 13:55:21 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.358 13:55:21 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.358 13:55:21 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.358 13:55:21 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.358 13:55:21 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.358 13:55:21 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.358 13:55:21 keyring_file -- paths/export.sh@5 -- # export PATH 00:26:25.358 13:55:21 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@47 -- # : 0 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:25.358 13:55:21 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:26:25.358 13:55:21 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:26:25.358 13:55:21 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:26:25.358 13:55:21 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:26:25.358 13:55:21 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:26:25.358 13:55:21 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:26:25.358 13:55:21 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:25.358 13:55:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:25.358 13:55:21 keyring_file -- keyring/common.sh@17 -- # name=key0 00:26:25.358 13:55:21 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:25.358 13:55:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:25.358 13:55:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:25.358 13:55:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.30bdxXuVuC 00:26:25.358 13:55:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:25.358 13:55:21 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:26:25.359 13:55:21 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:25.359 13:55:21 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:26:25.359 13:55:21 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:26:25.359 13:55:21 keyring_file -- nvmf/common.sh@705 -- # python - 00:26:25.359 13:55:21 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.30bdxXuVuC 00:26:25.359 13:55:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.30bdxXuVuC 00:26:25.359 13:55:21 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.30bdxXuVuC 00:26:25.359 13:55:21 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:26:25.359 13:55:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:25.359 13:55:21 keyring_file -- keyring/common.sh@17 -- # name=key1 00:26:25.359 13:55:21 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:26:25.359 13:55:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:25.359 13:55:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:25.359 13:55:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.xsoM6IElBj 00:26:25.359 13:55:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:26:25.359 13:55:21 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:26:25.359 13:55:21 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:26:25.359 13:55:21 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:25.359 13:55:21 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:26:25.359 13:55:21 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:26:25.359 13:55:21 keyring_file -- nvmf/common.sh@705 -- # python - 00:26:25.359 13:55:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.xsoM6IElBj 00:26:25.359 13:55:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.xsoM6IElBj 00:26:25.359 13:55:21 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.xsoM6IElBj 00:26:25.359 13:55:21 keyring_file -- keyring/file.sh@30 -- # tgtpid=692073 00:26:25.359 13:55:21 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:26:25.359 13:55:21 keyring_file -- keyring/file.sh@32 -- # waitforlisten 692073 00:26:25.359 13:55:21 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 692073 ']' 00:26:25.359 13:55:21 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.359 13:55:21 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:25.359 13:55:21 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.359 13:55:21 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:25.359 13:55:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:25.359 [2024-07-25 13:55:22.031707] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
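The prep_key calls traced above reduce to a small recipe: write the hex key in NVMe TLS PSK interchange format into a private temp file. format_interchange_psk delegates to an inline python helper whose body the xtrace does not show, so only its inputs and output path are certain here:

    path=$(mktemp)                       # /tmp/tmp.30bdxXuVuC for key0 in this run
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"
    chmod 0600 "$path"                   # anything looser is rejected by the keyring (see below)
    echo "$path"
    # key1 is prepared the same way from 112233445566778899aabbccddeeff00

With both keys on disk, spdk_tgt is started and waitforlisten blocks until its RPC socket is up.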
00:26:25.359 [2024-07-25 13:55:22.031799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid692073 ] 00:26:25.359 EAL: No free 2048 kB hugepages reported on node 1 00:26:25.359 [2024-07-25 13:55:22.088208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.359 [2024-07-25 13:55:22.190147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.617 13:55:22 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:25.617 13:55:22 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:26:25.617 13:55:22 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:26:25.617 13:55:22 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.617 13:55:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:25.617 [2024-07-25 13:55:22.442077] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.617 null0 00:26:25.617 [2024-07-25 13:55:22.474138] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:25.617 [2024-07-25 13:55:22.474573] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:25.617 [2024-07-25 13:55:22.482142] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:25.618 13:55:22 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.618 13:55:22 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:25.618 13:55:22 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:26:25.618 13:55:22 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:25.618 13:55:22 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:25.618 13:55:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:25.618 13:55:22 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:25.618 13:55:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:25.618 13:55:22 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:25.618 13:55:22 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.618 13:55:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:25.618 [2024-07-25 13:55:22.490156] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:26:25.618 request: 00:26:25.618 { 00:26:25.618 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:26:25.618 "secure_channel": false, 00:26:25.618 "listen_address": { 00:26:25.618 "trtype": "tcp", 00:26:25.618 "traddr": "127.0.0.1", 00:26:25.618 "trsvcid": "4420" 00:26:25.618 }, 00:26:25.618 "method": "nvmf_subsystem_add_listener", 00:26:25.618 "req_id": 1 00:26:25.618 } 00:26:25.618 Got JSON-RPC error response 00:26:25.618 response: 00:26:25.618 { 00:26:25.618 "code": -32602, 00:26:25.618 "message": "Invalid parameters" 00:26:25.618 } 00:26:25.618 13:55:22 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:25.618 13:55:22 keyring_file -- common/autotest_common.sh@653 -- # es=1 
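The negative test that just ran: the listener on 127.0.0.1:4420 was already created for nqn.2016-06.io.spdk:cnode0, so a second, identical nvmf_subsystem_add_listener is expected to fail with -32602 (Invalid parameters, "Listener already exists"), and es=1 records exactly that failure for the NOT wrapper evaluated around this point. Standalone, the same check looks like:

    scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
        nqn.2016-06.io.spdk:cnode0          # first registration succeeds
    scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
        nqn.2016-06.io.spdk:cnode0          # repeat => JSON-RPC error -32602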
00:26:25.618 13:55:22 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:25.618 13:55:22 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:25.618 13:55:22 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:25.618 13:55:22 keyring_file -- keyring/file.sh@46 -- # bperfpid=692082 00:26:25.618 13:55:22 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:26:25.618 13:55:22 keyring_file -- keyring/file.sh@48 -- # waitforlisten 692082 /var/tmp/bperf.sock 00:26:25.618 13:55:22 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 692082 ']' 00:26:25.618 13:55:22 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:25.618 13:55:22 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:25.618 13:55:22 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:25.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:25.618 13:55:22 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:25.618 13:55:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:25.618 [2024-07-25 13:55:22.533957] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:25.618 [2024-07-25 13:55:22.534020] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid692082 ] 00:26:25.618 EAL: No free 2048 kB hugepages reported on node 1 00:26:25.618 [2024-07-25 13:55:22.589494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.875 [2024-07-25 13:55:22.705394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.875 13:55:22 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:25.875 13:55:22 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:26:25.876 13:55:22 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.30bdxXuVuC 00:26:25.876 13:55:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.30bdxXuVuC 00:26:26.134 13:55:23 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.xsoM6IElBj 00:26:26.134 13:55:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.xsoM6IElBj 00:26:26.392 13:55:23 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:26:26.392 13:55:23 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:26:26.392 13:55:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:26.392 13:55:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:26.392 13:55:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:26.649 13:55:23 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.30bdxXuVuC == \/\t\m\p\/\t\m\p\.\3\0\b\d\x\X\u\V\u\C ]] 00:26:26.649 13:55:23 keyring_file -- keyring/file.sh@52 
-- # get_key key1 00:26:26.649 13:55:23 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:26:26.649 13:55:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:26.649 13:55:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:26.649 13:55:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:26.907 13:55:23 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.xsoM6IElBj == \/\t\m\p\/\t\m\p\.\x\s\o\M\6\I\E\l\B\j ]] 00:26:26.907 13:55:23 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:26:26.907 13:55:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:26.907 13:55:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:26.907 13:55:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:26.907 13:55:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:26.907 13:55:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:27.164 13:55:24 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:26:27.164 13:55:24 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:26:27.164 13:55:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:27.164 13:55:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:27.164 13:55:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:27.164 13:55:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:27.164 13:55:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:27.422 13:55:24 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:26:27.422 13:55:24 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:27.422 13:55:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:27.680 [2024-07-25 13:55:24.534612] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:27.680 nvme0n1 00:26:27.680 13:55:24 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:26:27.680 13:55:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:27.680 13:55:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:27.680 13:55:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:27.680 13:55:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:27.680 13:55:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:27.938 13:55:24 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:26:27.938 13:55:24 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:26:27.938 13:55:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:27.938 13:55:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:27.938 13:55:24 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:26:27.938 13:55:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:27.938 13:55:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:28.196 13:55:25 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:26:28.196 13:55:25 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:28.456 Running I/O for 1 seconds... 00:26:29.394 00:26:29.394 Latency(us) 00:26:29.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.394 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:26:29.394 nvme0n1 : 1.01 10007.94 39.09 0.00 0.00 12739.02 4029.25 19806.44 00:26:29.394 =================================================================================================================== 00:26:29.394 Total : 10007.94 39.09 0.00 0.00 12739.02 4029.25 19806.44 00:26:29.394 0 00:26:29.394 13:55:26 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:29.394 13:55:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:29.652 13:55:26 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:26:29.652 13:55:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:29.652 13:55:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:29.652 13:55:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:29.652 13:55:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:29.652 13:55:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:29.910 13:55:26 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:26:29.910 13:55:26 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:26:29.910 13:55:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:29.910 13:55:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:29.910 13:55:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:29.910 13:55:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:29.910 13:55:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:30.168 13:55:27 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:26:30.168 13:55:27 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:30.168 13:55:27 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:26:30.168 13:55:27 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:30.168 13:55:27 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:26:30.168 13:55:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:30.168 13:55:27 keyring_file -- common/autotest_common.sh@642 -- # type 
-t bperf_cmd 00:26:30.168 13:55:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:30.168 13:55:27 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:30.168 13:55:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:30.426 [2024-07-25 13:55:27.272708] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:30.426 [2024-07-25 13:55:27.273291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x256e9a0 (107): Transport endpoint is not connected 00:26:30.426 [2024-07-25 13:55:27.274282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x256e9a0 (9): Bad file descriptor 00:26:30.426 [2024-07-25 13:55:27.275281] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:30.426 [2024-07-25 13:55:27.275308] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:26:30.426 [2024-07-25 13:55:27.275323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:30.426 request: 00:26:30.426 { 00:26:30.426 "name": "nvme0", 00:26:30.426 "trtype": "tcp", 00:26:30.426 "traddr": "127.0.0.1", 00:26:30.426 "adrfam": "ipv4", 00:26:30.426 "trsvcid": "4420", 00:26:30.426 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:30.426 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:30.426 "prchk_reftag": false, 00:26:30.426 "prchk_guard": false, 00:26:30.426 "hdgst": false, 00:26:30.426 "ddgst": false, 00:26:30.426 "psk": "key1", 00:26:30.426 "method": "bdev_nvme_attach_controller", 00:26:30.426 "req_id": 1 00:26:30.426 } 00:26:30.426 Got JSON-RPC error response 00:26:30.426 response: 00:26:30.426 { 00:26:30.426 "code": -5, 00:26:30.426 "message": "Input/output error" 00:26:30.426 } 00:26:30.426 13:55:27 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:26:30.426 13:55:27 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:30.426 13:55:27 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:30.426 13:55:27 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:30.426 13:55:27 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:26:30.426 13:55:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:30.426 13:55:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:30.426 13:55:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:30.426 13:55:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:30.426 13:55:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:30.684 13:55:27 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:26:30.684 13:55:27 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:26:30.684 13:55:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:30.684 13:55:27 keyring_file -- keyring/common.sh@12 -- # jq 
-r .refcnt 00:26:30.684 13:55:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:30.684 13:55:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:30.684 13:55:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:30.942 13:55:27 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:26:30.942 13:55:27 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:26:30.942 13:55:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:31.200 13:55:28 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:26:31.200 13:55:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:26:31.458 13:55:28 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:26:31.458 13:55:28 keyring_file -- keyring/file.sh@77 -- # jq length 00:26:31.458 13:55:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:31.739 13:55:28 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:26:31.739 13:55:28 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.30bdxXuVuC 00:26:31.739 13:55:28 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.30bdxXuVuC 00:26:31.739 13:55:28 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:26:31.739 13:55:28 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.30bdxXuVuC 00:26:31.739 13:55:28 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:26:31.739 13:55:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:31.739 13:55:28 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:26:31.739 13:55:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:31.739 13:55:28 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.30bdxXuVuC 00:26:31.739 13:55:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.30bdxXuVuC 00:26:31.997 [2024-07-25 13:55:28.785457] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.30bdxXuVuC': 0100660 00:26:31.997 [2024-07-25 13:55:28.785491] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:26:31.997 request: 00:26:31.997 { 00:26:31.997 "name": "key0", 00:26:31.997 "path": "/tmp/tmp.30bdxXuVuC", 00:26:31.997 "method": "keyring_file_add_key", 00:26:31.997 "req_id": 1 00:26:31.997 } 00:26:31.997 Got JSON-RPC error response 00:26:31.997 response: 00:26:31.997 { 00:26:31.997 "code": -1, 00:26:31.997 "message": "Operation not permitted" 00:26:31.997 } 00:26:31.997 13:55:28 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:26:31.997 13:55:28 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:31.997 13:55:28 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:31.997 13:55:28 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
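The (( !es == 0 )) above is the tail of the NOT wrapper finishing the permission test: the key file was flipped to 0660, keyring_file_add_key refused it ("Invalid permissions for key file ... 0100660", Operation not permitted), and NOT turns that refusal into a pass. A simplified sketch of the wrapper, reconstructed from the fragments visible in the trace (the es capture, the es > 128 signal screen, and the final arithmetic test; the real helper in autotest_common.sh is more elaborate):

    NOT() {
        local es=0
        "$@" || es=$?     # run the wrapped command, keep its exit status
        ((!es == 0))      # succeed only if the command failed
    }
    # usage, as in this test:
    chmod 0660 /tmp/tmp.30bdxXuVuC
    NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.30bdxXuVuC   # passes: the add is refused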
00:26:31.997 13:55:28 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.30bdxXuVuC 00:26:31.997 13:55:28 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.30bdxXuVuC 00:26:31.997 13:55:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.30bdxXuVuC 00:26:32.256 13:55:29 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.30bdxXuVuC 00:26:32.256 13:55:29 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:26:32.256 13:55:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:32.256 13:55:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:32.256 13:55:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:32.256 13:55:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:32.256 13:55:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:32.514 13:55:29 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:26:32.514 13:55:29 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:32.514 13:55:29 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:26:32.514 13:55:29 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:32.514 13:55:29 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:26:32.514 13:55:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:32.514 13:55:29 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:26:32.514 13:55:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:32.514 13:55:29 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:32.514 13:55:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:32.514 [2024-07-25 13:55:29.535536] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.30bdxXuVuC': No such file or directory 00:26:32.514 [2024-07-25 13:55:29.535569] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:26:32.514 [2024-07-25 13:55:29.535599] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:26:32.514 [2024-07-25 13:55:29.535609] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:32.514 [2024-07-25 13:55:29.535620] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:26:32.514 request: 00:26:32.514 { 00:26:32.514 "name": "nvme0", 00:26:32.514 "trtype": "tcp", 00:26:32.514 "traddr": "127.0.0.1", 00:26:32.514 "adrfam": "ipv4", 00:26:32.514 "trsvcid": "4420", 00:26:32.514 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:26:32.514 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:32.514 "prchk_reftag": false, 00:26:32.514 "prchk_guard": false, 00:26:32.514 "hdgst": false, 00:26:32.514 "ddgst": false, 00:26:32.514 "psk": "key0", 00:26:32.514 "method": "bdev_nvme_attach_controller", 00:26:32.514 "req_id": 1 00:26:32.514 } 00:26:32.514 Got JSON-RPC error response 00:26:32.514 response: 00:26:32.514 { 00:26:32.514 "code": -19, 00:26:32.514 "message": "No such device" 00:26:32.514 } 00:26:32.773 13:55:29 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:26:32.773 13:55:29 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:32.773 13:55:29 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:32.773 13:55:29 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:32.773 13:55:29 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:26:32.773 13:55:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:32.773 13:55:29 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:32.773 13:55:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:32.773 13:55:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:26:32.773 13:55:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:32.773 13:55:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:32.773 13:55:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:32.773 13:55:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.SOHyKJsL2u 00:26:32.773 13:55:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:32.773 13:55:29 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:32.773 13:55:29 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:26:32.773 13:55:29 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:32.773 13:55:29 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:26:32.773 13:55:29 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:26:32.773 13:55:29 keyring_file -- nvmf/common.sh@705 -- # python - 00:26:33.031 13:55:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SOHyKJsL2u 00:26:33.031 13:55:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.SOHyKJsL2u 00:26:33.031 13:55:29 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.SOHyKJsL2u 00:26:33.031 13:55:29 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SOHyKJsL2u 00:26:33.031 13:55:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SOHyKJsL2u 00:26:33.289 13:55:30 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:33.289 13:55:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:33.546 nvme0n1 00:26:33.546 13:55:30 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:26:33.546 13:55:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:33.546 13:55:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:33.546 13:55:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:33.546 13:55:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:33.546 13:55:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:33.804 13:55:30 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:26:33.804 13:55:30 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:26:33.804 13:55:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:34.060 13:55:30 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:26:34.060 13:55:30 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:26:34.060 13:55:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:34.060 13:55:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:34.060 13:55:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:34.317 13:55:31 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:26:34.317 13:55:31 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:26:34.317 13:55:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:34.317 13:55:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:34.317 13:55:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:34.317 13:55:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:34.317 13:55:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:34.575 13:55:31 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:26:34.575 13:55:31 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:34.575 13:55:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:34.832 13:55:31 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:26:34.832 13:55:31 keyring_file -- keyring/file.sh@104 -- # jq length 00:26:34.832 13:55:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:35.089 13:55:31 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:26:35.089 13:55:31 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SOHyKJsL2u 00:26:35.089 13:55:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SOHyKJsL2u 00:26:35.361 13:55:32 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.xsoM6IElBj 00:26:35.361 13:55:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.xsoM6IElBj 00:26:35.619 13:55:32 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:35.619 13:55:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:35.877 nvme0n1 00:26:35.877 13:55:32 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:26:35.877 13:55:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:26:36.135 13:55:33 keyring_file -- keyring/file.sh@112 -- # config='{ 00:26:36.135 "subsystems": [ 00:26:36.135 { 00:26:36.135 "subsystem": "keyring", 00:26:36.135 "config": [ 00:26:36.135 { 00:26:36.135 "method": "keyring_file_add_key", 00:26:36.135 "params": { 00:26:36.135 "name": "key0", 00:26:36.135 "path": "/tmp/tmp.SOHyKJsL2u" 00:26:36.135 } 00:26:36.135 }, 00:26:36.135 { 00:26:36.135 "method": "keyring_file_add_key", 00:26:36.135 "params": { 00:26:36.135 "name": "key1", 00:26:36.135 "path": "/tmp/tmp.xsoM6IElBj" 00:26:36.135 } 00:26:36.135 } 00:26:36.135 ] 00:26:36.135 }, 00:26:36.135 { 00:26:36.135 "subsystem": "iobuf", 00:26:36.135 "config": [ 00:26:36.135 { 00:26:36.135 "method": "iobuf_set_options", 00:26:36.135 "params": { 00:26:36.135 "small_pool_count": 8192, 00:26:36.135 "large_pool_count": 1024, 00:26:36.135 "small_bufsize": 8192, 00:26:36.135 "large_bufsize": 135168 00:26:36.135 } 00:26:36.135 } 00:26:36.135 ] 00:26:36.135 }, 00:26:36.135 { 00:26:36.135 "subsystem": "sock", 00:26:36.135 "config": [ 00:26:36.135 { 00:26:36.135 "method": "sock_set_default_impl", 00:26:36.135 "params": { 00:26:36.135 "impl_name": "posix" 00:26:36.135 } 00:26:36.135 }, 00:26:36.135 { 00:26:36.135 "method": "sock_impl_set_options", 00:26:36.135 "params": { 00:26:36.135 "impl_name": "ssl", 00:26:36.135 "recv_buf_size": 4096, 00:26:36.135 "send_buf_size": 4096, 00:26:36.135 "enable_recv_pipe": true, 00:26:36.135 "enable_quickack": false, 00:26:36.135 "enable_placement_id": 0, 00:26:36.135 "enable_zerocopy_send_server": true, 00:26:36.135 "enable_zerocopy_send_client": false, 00:26:36.135 "zerocopy_threshold": 0, 00:26:36.135 "tls_version": 0, 00:26:36.135 "enable_ktls": false 00:26:36.135 } 00:26:36.135 }, 00:26:36.135 { 00:26:36.135 "method": "sock_impl_set_options", 00:26:36.135 "params": { 00:26:36.135 "impl_name": "posix", 00:26:36.135 "recv_buf_size": 2097152, 00:26:36.135 "send_buf_size": 2097152, 00:26:36.135 "enable_recv_pipe": true, 00:26:36.135 "enable_quickack": false, 00:26:36.135 "enable_placement_id": 0, 00:26:36.135 "enable_zerocopy_send_server": true, 00:26:36.135 "enable_zerocopy_send_client": false, 00:26:36.135 "zerocopy_threshold": 0, 00:26:36.135 "tls_version": 0, 00:26:36.135 "enable_ktls": false 00:26:36.135 } 00:26:36.135 } 00:26:36.135 ] 00:26:36.135 }, 00:26:36.135 { 00:26:36.135 "subsystem": "vmd", 00:26:36.135 "config": [] 00:26:36.135 }, 00:26:36.135 { 00:26:36.135 "subsystem": "accel", 00:26:36.135 "config": [ 00:26:36.135 { 00:26:36.135 "method": "accel_set_options", 00:26:36.135 "params": { 00:26:36.135 "small_cache_size": 128, 00:26:36.135 "large_cache_size": 16, 00:26:36.135 "task_count": 2048, 00:26:36.135 "sequence_count": 2048, 00:26:36.135 "buf_count": 2048 00:26:36.135 } 00:26:36.135 } 00:26:36.135 ] 00:26:36.135 }, 00:26:36.135 { 00:26:36.135 
"subsystem": "bdev", 00:26:36.135 "config": [ 00:26:36.135 { 00:26:36.135 "method": "bdev_set_options", 00:26:36.135 "params": { 00:26:36.135 "bdev_io_pool_size": 65535, 00:26:36.135 "bdev_io_cache_size": 256, 00:26:36.135 "bdev_auto_examine": true, 00:26:36.135 "iobuf_small_cache_size": 128, 00:26:36.135 "iobuf_large_cache_size": 16 00:26:36.135 } 00:26:36.135 }, 00:26:36.135 { 00:26:36.135 "method": "bdev_raid_set_options", 00:26:36.135 "params": { 00:26:36.135 "process_window_size_kb": 1024, 00:26:36.135 "process_max_bandwidth_mb_sec": 0 00:26:36.135 } 00:26:36.135 }, 00:26:36.135 { 00:26:36.135 "method": "bdev_iscsi_set_options", 00:26:36.135 "params": { 00:26:36.135 "timeout_sec": 30 00:26:36.135 } 00:26:36.135 }, 00:26:36.135 { 00:26:36.135 "method": "bdev_nvme_set_options", 00:26:36.135 "params": { 00:26:36.135 "action_on_timeout": "none", 00:26:36.135 "timeout_us": 0, 00:26:36.135 "timeout_admin_us": 0, 00:26:36.135 "keep_alive_timeout_ms": 10000, 00:26:36.135 "arbitration_burst": 0, 00:26:36.135 "low_priority_weight": 0, 00:26:36.135 "medium_priority_weight": 0, 00:26:36.135 "high_priority_weight": 0, 00:26:36.135 "nvme_adminq_poll_period_us": 10000, 00:26:36.135 "nvme_ioq_poll_period_us": 0, 00:26:36.135 "io_queue_requests": 512, 00:26:36.135 "delay_cmd_submit": true, 00:26:36.135 "transport_retry_count": 4, 00:26:36.135 "bdev_retry_count": 3, 00:26:36.135 "transport_ack_timeout": 0, 00:26:36.135 "ctrlr_loss_timeout_sec": 0, 00:26:36.135 "reconnect_delay_sec": 0, 00:26:36.135 "fast_io_fail_timeout_sec": 0, 00:26:36.135 "disable_auto_failback": false, 00:26:36.135 "generate_uuids": false, 00:26:36.135 "transport_tos": 0, 00:26:36.135 "nvme_error_stat": false, 00:26:36.135 "rdma_srq_size": 0, 00:26:36.135 "io_path_stat": false, 00:26:36.135 "allow_accel_sequence": false, 00:26:36.135 "rdma_max_cq_size": 0, 00:26:36.135 "rdma_cm_event_timeout_ms": 0, 00:26:36.135 "dhchap_digests": [ 00:26:36.135 "sha256", 00:26:36.135 "sha384", 00:26:36.135 "sha512" 00:26:36.135 ], 00:26:36.135 "dhchap_dhgroups": [ 00:26:36.135 "null", 00:26:36.135 "ffdhe2048", 00:26:36.135 "ffdhe3072", 00:26:36.135 "ffdhe4096", 00:26:36.135 "ffdhe6144", 00:26:36.135 "ffdhe8192" 00:26:36.135 ] 00:26:36.135 } 00:26:36.135 }, 00:26:36.135 { 00:26:36.135 "method": "bdev_nvme_attach_controller", 00:26:36.135 "params": { 00:26:36.135 "name": "nvme0", 00:26:36.135 "trtype": "TCP", 00:26:36.135 "adrfam": "IPv4", 00:26:36.135 "traddr": "127.0.0.1", 00:26:36.135 "trsvcid": "4420", 00:26:36.135 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:36.135 "prchk_reftag": false, 00:26:36.135 "prchk_guard": false, 00:26:36.135 "ctrlr_loss_timeout_sec": 0, 00:26:36.135 "reconnect_delay_sec": 0, 00:26:36.135 "fast_io_fail_timeout_sec": 0, 00:26:36.135 "psk": "key0", 00:26:36.135 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:36.135 "hdgst": false, 00:26:36.135 "ddgst": false 00:26:36.135 } 00:26:36.135 }, 00:26:36.135 { 00:26:36.135 "method": "bdev_nvme_set_hotplug", 00:26:36.135 "params": { 00:26:36.135 "period_us": 100000, 00:26:36.135 "enable": false 00:26:36.135 } 00:26:36.135 }, 00:26:36.135 { 00:26:36.135 "method": "bdev_wait_for_examine" 00:26:36.135 } 00:26:36.135 ] 00:26:36.135 }, 00:26:36.135 { 00:26:36.135 "subsystem": "nbd", 00:26:36.135 "config": [] 00:26:36.135 } 00:26:36.135 ] 00:26:36.135 }' 00:26:36.135 13:55:33 keyring_file -- keyring/file.sh@114 -- # killprocess 692082 00:26:36.135 13:55:33 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 692082 ']' 00:26:36.135 13:55:33 keyring_file -- 
common/autotest_common.sh@954 -- # kill -0 692082 00:26:36.135 13:55:33 keyring_file -- common/autotest_common.sh@955 -- # uname 00:26:36.135 13:55:33 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:36.135 13:55:33 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 692082 00:26:36.135 13:55:33 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:36.136 13:55:33 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:36.136 13:55:33 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 692082' 00:26:36.136 killing process with pid 692082 00:26:36.136 13:55:33 keyring_file -- common/autotest_common.sh@969 -- # kill 692082 00:26:36.136 Received shutdown signal, test time was about 1.000000 seconds 00:26:36.136 00:26:36.136 Latency(us) 00:26:36.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.136 =================================================================================================================== 00:26:36.136 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:36.136 13:55:33 keyring_file -- common/autotest_common.sh@974 -- # wait 692082 00:26:36.394 13:55:33 keyring_file -- keyring/file.sh@117 -- # bperfpid=693543 00:26:36.394 13:55:33 keyring_file -- keyring/file.sh@119 -- # waitforlisten 693543 /var/tmp/bperf.sock 00:26:36.394 13:55:33 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 693543 ']' 00:26:36.394 13:55:33 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:36.394 13:55:33 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:36.394 13:55:33 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:36.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
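Note on the handoff above: keyring/file.sh saves the live JSON configuration from the first bperf instance before killing it, then relaunches bdevperf with that JSON replayed as its startup config, so both keyring_file_add_key calls and the TLS-enabled bdev_nvme_attach_controller are re-executed automatically. A minimal sketch of the pattern, assuming $config holds the JSON echoed below (the -c <(...) process substitution is what appears as -c /dev/fd/63 in the trace):

# capture the running configuration from the old instance
config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)
# restart bdevperf and replay the saved JSON as its startup config
build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config") &

The keys come back after the restart because the config records their file paths, not their contents; /tmp/tmp.SOHyKJsL2u and /tmp/tmp.xsoM6IElBj are still on disk.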
00:26:36.394 13:55:33 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:26:36.394 13:55:33 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:36.394 13:55:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:36.394 13:55:33 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:26:36.394 "subsystems": [ 00:26:36.394 { 00:26:36.394 "subsystem": "keyring", 00:26:36.394 "config": [ 00:26:36.394 { 00:26:36.394 "method": "keyring_file_add_key", 00:26:36.394 "params": { 00:26:36.394 "name": "key0", 00:26:36.394 "path": "/tmp/tmp.SOHyKJsL2u" 00:26:36.394 } 00:26:36.394 }, 00:26:36.394 { 00:26:36.394 "method": "keyring_file_add_key", 00:26:36.394 "params": { 00:26:36.394 "name": "key1", 00:26:36.394 "path": "/tmp/tmp.xsoM6IElBj" 00:26:36.394 } 00:26:36.394 } 00:26:36.394 ] 00:26:36.394 }, 00:26:36.394 { 00:26:36.394 "subsystem": "iobuf", 00:26:36.394 "config": [ 00:26:36.394 { 00:26:36.395 "method": "iobuf_set_options", 00:26:36.395 "params": { 00:26:36.395 "small_pool_count": 8192, 00:26:36.395 "large_pool_count": 1024, 00:26:36.395 "small_bufsize": 8192, 00:26:36.395 "large_bufsize": 135168 00:26:36.395 } 00:26:36.395 } 00:26:36.395 ] 00:26:36.395 }, 00:26:36.395 { 00:26:36.395 "subsystem": "sock", 00:26:36.395 "config": [ 00:26:36.395 { 00:26:36.395 "method": "sock_set_default_impl", 00:26:36.395 "params": { 00:26:36.395 "impl_name": "posix" 00:26:36.395 } 00:26:36.395 }, 00:26:36.395 { 00:26:36.395 "method": "sock_impl_set_options", 00:26:36.395 "params": { 00:26:36.395 "impl_name": "ssl", 00:26:36.395 "recv_buf_size": 4096, 00:26:36.395 "send_buf_size": 4096, 00:26:36.395 "enable_recv_pipe": true, 00:26:36.395 "enable_quickack": false, 00:26:36.395 "enable_placement_id": 0, 00:26:36.395 "enable_zerocopy_send_server": true, 00:26:36.395 "enable_zerocopy_send_client": false, 00:26:36.395 "zerocopy_threshold": 0, 00:26:36.395 "tls_version": 0, 00:26:36.395 "enable_ktls": false 00:26:36.395 } 00:26:36.395 }, 00:26:36.395 { 00:26:36.395 "method": "sock_impl_set_options", 00:26:36.395 "params": { 00:26:36.395 "impl_name": "posix", 00:26:36.395 "recv_buf_size": 2097152, 00:26:36.395 "send_buf_size": 2097152, 00:26:36.395 "enable_recv_pipe": true, 00:26:36.395 "enable_quickack": false, 00:26:36.395 "enable_placement_id": 0, 00:26:36.395 "enable_zerocopy_send_server": true, 00:26:36.395 "enable_zerocopy_send_client": false, 00:26:36.395 "zerocopy_threshold": 0, 00:26:36.395 "tls_version": 0, 00:26:36.395 "enable_ktls": false 00:26:36.395 } 00:26:36.395 } 00:26:36.395 ] 00:26:36.395 }, 00:26:36.395 { 00:26:36.395 "subsystem": "vmd", 00:26:36.395 "config": [] 00:26:36.395 }, 00:26:36.395 { 00:26:36.395 "subsystem": "accel", 00:26:36.395 "config": [ 00:26:36.395 { 00:26:36.395 "method": "accel_set_options", 00:26:36.395 "params": { 00:26:36.395 "small_cache_size": 128, 00:26:36.395 "large_cache_size": 16, 00:26:36.395 "task_count": 2048, 00:26:36.395 "sequence_count": 2048, 00:26:36.395 "buf_count": 2048 00:26:36.395 } 00:26:36.395 } 00:26:36.395 ] 00:26:36.395 }, 00:26:36.395 { 00:26:36.395 "subsystem": "bdev", 00:26:36.395 "config": [ 00:26:36.395 { 00:26:36.395 "method": "bdev_set_options", 00:26:36.395 "params": { 00:26:36.395 "bdev_io_pool_size": 65535, 00:26:36.395 "bdev_io_cache_size": 256, 00:26:36.395 "bdev_auto_examine": true, 00:26:36.395 "iobuf_small_cache_size": 128, 00:26:36.395 "iobuf_large_cache_size": 16 
00:26:36.395 } 00:26:36.395 }, 00:26:36.395 { 00:26:36.395 "method": "bdev_raid_set_options", 00:26:36.395 "params": { 00:26:36.395 "process_window_size_kb": 1024, 00:26:36.395 "process_max_bandwidth_mb_sec": 0 00:26:36.395 } 00:26:36.395 }, 00:26:36.395 { 00:26:36.395 "method": "bdev_iscsi_set_options", 00:26:36.395 "params": { 00:26:36.395 "timeout_sec": 30 00:26:36.395 } 00:26:36.395 }, 00:26:36.395 { 00:26:36.395 "method": "bdev_nvme_set_options", 00:26:36.395 "params": { 00:26:36.395 "action_on_timeout": "none", 00:26:36.395 "timeout_us": 0, 00:26:36.395 "timeout_admin_us": 0, 00:26:36.395 "keep_alive_timeout_ms": 10000, 00:26:36.395 "arbitration_burst": 0, 00:26:36.395 "low_priority_weight": 0, 00:26:36.395 "medium_priority_weight": 0, 00:26:36.395 "high_priority_weight": 0, 00:26:36.395 "nvme_adminq_poll_period_us": 10000, 00:26:36.395 "nvme_ioq_poll_period_us": 0, 00:26:36.395 "io_queue_requests": 512, 00:26:36.395 "delay_cmd_submit": true, 00:26:36.395 "transport_retry_count": 4, 00:26:36.395 "bdev_retry_count": 3, 00:26:36.395 "transport_ack_timeout": 0, 00:26:36.395 "ctrlr_loss_timeout_sec": 0, 00:26:36.395 "reconnect_delay_sec": 0, 00:26:36.395 "fast_io_fail_timeout_sec": 0, 00:26:36.395 "disable_auto_failback": false, 00:26:36.395 "generate_uuids": false, 00:26:36.395 "transport_tos": 0, 00:26:36.395 "nvme_error_stat": false, 00:26:36.395 "rdma_srq_size": 0, 00:26:36.395 "io_path_stat": false, 00:26:36.395 "allow_accel_sequence": false, 00:26:36.395 "rdma_max_cq_size": 0, 00:26:36.395 "rdma_cm_event_timeout_ms": 0, 00:26:36.395 "dhchap_digests": [ 00:26:36.395 "sha256", 00:26:36.395 "sha384", 00:26:36.395 "sha512" 00:26:36.395 ], 00:26:36.395 "dhchap_dhgroups": [ 00:26:36.395 "null", 00:26:36.395 "ffdhe2048", 00:26:36.395 "ffdhe3072", 00:26:36.395 "ffdhe4096", 00:26:36.395 "ffdhe6144", 00:26:36.395 "ffdhe8192" 00:26:36.395 ] 00:26:36.395 } 00:26:36.395 }, 00:26:36.395 { 00:26:36.395 "method": "bdev_nvme_attach_controller", 00:26:36.395 "params": { 00:26:36.395 "name": "nvme0", 00:26:36.395 "trtype": "TCP", 00:26:36.395 "adrfam": "IPv4", 00:26:36.395 "traddr": "127.0.0.1", 00:26:36.395 "trsvcid": "4420", 00:26:36.395 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:36.395 "prchk_reftag": false, 00:26:36.395 "prchk_guard": false, 00:26:36.395 "ctrlr_loss_timeout_sec": 0, 00:26:36.395 "reconnect_delay_sec": 0, 00:26:36.395 "fast_io_fail_timeout_sec": 0, 00:26:36.395 "psk": "key0", 00:26:36.395 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:36.395 "hdgst": false, 00:26:36.395 "ddgst": false 00:26:36.395 } 00:26:36.395 }, 00:26:36.395 { 00:26:36.395 "method": "bdev_nvme_set_hotplug", 00:26:36.395 "params": { 00:26:36.395 "period_us": 100000, 00:26:36.395 "enable": false 00:26:36.395 } 00:26:36.395 }, 00:26:36.395 { 00:26:36.395 "method": "bdev_wait_for_examine" 00:26:36.395 } 00:26:36.395 ] 00:26:36.395 }, 00:26:36.395 { 00:26:36.395 "subsystem": "nbd", 00:26:36.395 "config": [] 00:26:36.395 } 00:26:36.395 ] 00:26:36.395 }' 00:26:36.395 [2024-07-25 13:55:33.366591] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
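The two key files registered in the keyring section of the config above hold PSKs in the TLS interchange format produced by format_interchange_psk: the prefix NVMeTLSkey-1, a two-hex-digit hash identifier (00 here, meaning no PSK digest), and a base64 payload. A hedged reconstruction of the "python -" helper invoked at nvmf/common.sh@705, assuming the payload is the ASCII key followed by its little-endian CRC32, which matches the NVMeTLSkey-1:00:MDAx...JEiQ: string printed later in this log:

# sketch of: format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
python3 - <<'EOF'
import base64, zlib
prefix, key, digest = "NVMeTLSkey-1", "00112233445566778899aabbccddeeff", 0
# payload = ASCII key bytes + little-endian CRC32 of those bytes (assumed layout)
payload = key.encode() + zlib.crc32(key.encode()).to_bytes(4, "little")
print(f"{prefix}:{digest:02x}:{base64.b64encode(payload).decode()}:")
EOF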
00:26:36.395 [2024-07-25 13:55:33.366669] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid693543 ] 00:26:36.395 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.395 [2024-07-25 13:55:33.422581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.653 [2024-07-25 13:55:33.527971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.911 [2024-07-25 13:55:33.713244] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:37.477 13:55:34 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:37.477 13:55:34 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:26:37.477 13:55:34 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:26:37.477 13:55:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:37.477 13:55:34 keyring_file -- keyring/file.sh@120 -- # jq length 00:26:37.735 13:55:34 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:26:37.735 13:55:34 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:26:37.735 13:55:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:37.735 13:55:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:37.735 13:55:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:37.735 13:55:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:37.735 13:55:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:37.993 13:55:34 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:26:37.993 13:55:34 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:26:37.993 13:55:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:37.993 13:55:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:37.993 13:55:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:37.993 13:55:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:37.993 13:55:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:38.251 13:55:35 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:26:38.251 13:55:35 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:26:38.251 13:55:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:26:38.251 13:55:35 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:26:38.510 13:55:35 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:26:38.510 13:55:35 keyring_file -- keyring/file.sh@1 -- # cleanup 00:26:38.510 13:55:35 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.SOHyKJsL2u /tmp/tmp.xsoM6IElBj 00:26:38.510 13:55:35 keyring_file -- keyring/file.sh@20 -- # killprocess 693543 00:26:38.510 13:55:35 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 693543 ']' 00:26:38.510 13:55:35 keyring_file -- common/autotest_common.sh@954 -- # kill -0 693543 00:26:38.510 13:55:35 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:26:38.510 13:55:35 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:38.510 13:55:35 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 693543 00:26:38.510 13:55:35 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:38.510 13:55:35 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:38.510 13:55:35 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 693543' 00:26:38.510 killing process with pid 693543 00:26:38.510 13:55:35 keyring_file -- common/autotest_common.sh@969 -- # kill 693543 00:26:38.510 Received shutdown signal, test time was about 1.000000 seconds 00:26:38.510 00:26:38.510 Latency(us) 00:26:38.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.510 =================================================================================================================== 00:26:38.510 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:38.510 13:55:35 keyring_file -- common/autotest_common.sh@974 -- # wait 693543 00:26:38.770 13:55:35 keyring_file -- keyring/file.sh@21 -- # killprocess 692073 00:26:38.770 13:55:35 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 692073 ']' 00:26:38.770 13:55:35 keyring_file -- common/autotest_common.sh@954 -- # kill -0 692073 00:26:38.770 13:55:35 keyring_file -- common/autotest_common.sh@955 -- # uname 00:26:38.770 13:55:35 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:38.770 13:55:35 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 692073 00:26:38.770 13:55:35 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:38.770 13:55:35 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:38.770 13:55:35 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 692073' 00:26:38.770 killing process with pid 692073 00:26:38.770 13:55:35 keyring_file -- common/autotest_common.sh@969 -- # kill 692073 00:26:38.770 [2024-07-25 13:55:35.623401] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:38.770 13:55:35 keyring_file -- common/autotest_common.sh@974 -- # wait 692073 00:26:39.338 00:26:39.338 real 0m14.239s 00:26:39.338 user 0m35.636s 00:26:39.338 sys 0m3.303s 00:26:39.338 13:55:36 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:39.338 13:55:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:39.338 ************************************ 00:26:39.338 END TEST keyring_file 00:26:39.338 ************************************ 00:26:39.338 13:55:36 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:26:39.338 13:55:36 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:26:39.338 13:55:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:39.338 13:55:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:39.338 13:55:36 -- common/autotest_common.sh@10 -- # set +x 00:26:39.338 ************************************ 00:26:39.338 START TEST keyring_linux 00:26:39.338 ************************************ 00:26:39.338 13:55:36 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:26:39.338 * Looking for test storage... 
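The keyring_linux suite starting here exercises the same TLS attach path as keyring_file above, but it sources PSKs from the kernel session keyring instead of 0600-mode files: keys are loaded with keyctl under names such as :spdk-test:key0 and referenced by that name in --psk. A minimal round-trip sketch assembled from commands that appear verbatim further down in this trace:

# load the interchange-format PSK into the session keyring (@s)
keyctl add user :spdk-test:key0 \
    "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
# attach over TCP, naming the kernel key rather than a key file
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0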
00:26:39.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:26:39.338 13:55:36 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:26:39.338 13:55:36 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:39.338 13:55:36 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.338 13:55:36 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.338 13:55:36 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.338 13:55:36 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.338 13:55:36 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.338 13:55:36 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.338 13:55:36 keyring_linux -- paths/export.sh@5 -- # export PATH 00:26:39.338 13:55:36 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:39.338 13:55:36 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:26:39.338 13:55:36 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:26:39.338 13:55:36 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:26:39.338 13:55:36 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:26:39.338 13:55:36 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:26:39.338 13:55:36 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:26:39.338 13:55:36 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:26:39.338 13:55:36 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:26:39.338 13:55:36 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:26:39.338 13:55:36 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:39.338 13:55:36 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:26:39.338 13:55:36 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:26:39.338 13:55:36 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@705 -- # python - 00:26:39.338 13:55:36 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:26:39.338 13:55:36 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:26:39.338 /tmp/:spdk-test:key0 00:26:39.338 13:55:36 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:26:39.338 13:55:36 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:26:39.338 13:55:36 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:26:39.338 13:55:36 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:26:39.338 13:55:36 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:26:39.338 13:55:36 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:26:39.338 13:55:36 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:26:39.338 13:55:36 keyring_linux -- nvmf/common.sh@705 -- # python - 00:26:39.338 13:55:36 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:26:39.338 13:55:36 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:26:39.338 /tmp/:spdk-test:key1 00:26:39.339 13:55:36 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=693907 00:26:39.339 13:55:36 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:26:39.339 13:55:36 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 693907 00:26:39.339 13:55:36 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 693907 ']' 00:26:39.339 13:55:36 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.339 13:55:36 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:39.339 13:55:36 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.339 13:55:36 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:39.339 13:55:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:39.339 [2024-07-25 13:55:36.327598] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
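spdk_tgt is launched in the background here (tgtpid 693907) and linux.sh blocks in waitforlisten until the target's RPC socket answers before configuring anything. A rough sketch of that polling idiom, with rpc_get_methods chosen as the probe by assumption (the harness's actual probe may differ):

build/bin/spdk_tgt &
tgtpid=$!
# poll the UNIX-domain RPC socket until the target answers (assumed probe)
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$tgtpid" || exit 1   # give up if the target died during startup
    sleep 0.1
done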
00:26:39.339 [2024-07-25 13:55:36.327682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid693907 ] 00:26:39.339 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.596 [2024-07-25 13:55:36.388179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.596 [2024-07-25 13:55:36.490808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.854 13:55:36 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:39.854 13:55:36 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:26:39.854 13:55:36 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:26:39.854 13:55:36 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.854 13:55:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:39.854 [2024-07-25 13:55:36.724947] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:39.854 null0 00:26:39.854 [2024-07-25 13:55:36.757000] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:39.854 [2024-07-25 13:55:36.757476] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:39.854 13:55:36 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.854 13:55:36 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:26:39.854 1034387856 00:26:39.854 13:55:36 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:26:39.854 742005023 00:26:39.854 13:55:36 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=694035 00:26:39.854 13:55:36 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:26:39.854 13:55:36 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 694035 /var/tmp/bperf.sock 00:26:39.854 13:55:36 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 694035 ']' 00:26:39.854 13:55:36 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:39.854 13:55:36 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:39.854 13:55:36 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:39.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:39.854 13:55:36 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:39.854 13:55:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:39.854 [2024-07-25 13:55:36.818899] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
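Because bdevperf was launched just above with --wait-for-rpc, its framework stays paused so the Linux keyring backend can be enabled before initialization completes; linux.sh then drives the sequence below over /var/tmp/bperf.sock. Condensed from the RPCs visible in the following trace:

rpc="scripts/rpc.py -s /var/tmp/bperf.sock"
# enable the kernel-keyring backend, then let the framework finish starting
$rpc keyring_linux_set_options --enable
$rpc framework_start_init
# attach using the kernel key loaded earlier with keyctl
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key0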
00:26:39.854 [2024-07-25 13:55:36.818979] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid694035 ] 00:26:39.854 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.854 [2024-07-25 13:55:36.874196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.112 [2024-07-25 13:55:36.979748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.112 13:55:37 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:40.112 13:55:37 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:26:40.112 13:55:37 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:26:40.112 13:55:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:26:40.371 13:55:37 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:26:40.371 13:55:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:40.629 13:55:37 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:26:40.629 13:55:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:26:40.887 [2024-07-25 13:55:37.826409] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:40.887 nvme0n1 00:26:40.887 13:55:37 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:26:40.887 13:55:37 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:26:40.887 13:55:37 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:26:40.887 13:55:37 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:26:40.887 13:55:37 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:26:40.887 13:55:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:41.145 13:55:38 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:26:41.145 13:55:38 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:26:41.145 13:55:38 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:26:41.145 13:55:38 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:26:41.145 13:55:38 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:41.145 13:55:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:41.145 13:55:38 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:26:41.403 13:55:38 keyring_linux -- keyring/linux.sh@25 -- # sn=1034387856 00:26:41.403 13:55:38 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:26:41.403 13:55:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:26:41.403 13:55:38 keyring_linux -- keyring/linux.sh@26 -- # [[ 1034387856 == \1\0\3\4\3\8\7\8\5\6 ]] 00:26:41.403 13:55:38 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1034387856 00:26:41.403 13:55:38 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:26:41.403 13:55:38 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:41.661 Running I/O for 1 seconds... 00:26:42.594 00:26:42.594 Latency(us) 00:26:42.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.594 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:42.594 nvme0n1 : 1.01 10773.55 42.08 0.00 0.00 11802.79 10145.94 23010.42 00:26:42.594 =================================================================================================================== 00:26:42.594 Total : 10773.55 42.08 0.00 0.00 11802.79 10145.94 23010.42 00:26:42.594 0 00:26:42.594 13:55:39 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:42.594 13:55:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:42.852 13:55:39 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:26:42.852 13:55:39 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:26:42.852 13:55:39 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:26:42.852 13:55:39 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:26:42.852 13:55:39 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:26:42.852 13:55:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:43.109 13:55:40 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:26:43.109 13:55:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:26:43.109 13:55:40 keyring_linux -- keyring/linux.sh@23 -- # return 00:26:43.109 13:55:40 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:43.109 13:55:40 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:26:43.109 13:55:40 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:43.109 13:55:40 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:26:43.109 13:55:40 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:43.109 13:55:40 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:26:43.109 13:55:40 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:43.109 13:55:40 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:43.109 13:55:40 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:43.368 [2024-07-25 13:55:40.293431] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:43.368 [2024-07-25 13:55:40.294013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2566890 (107): Transport endpoint is not connected 00:26:43.368 [2024-07-25 13:55:40.295004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2566890 (9): Bad file descriptor 00:26:43.368 [2024-07-25 13:55:40.296004] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:43.368 [2024-07-25 13:55:40.296024] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:26:43.368 [2024-07-25 13:55:40.296047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:43.368 request: 00:26:43.368 { 00:26:43.368 "name": "nvme0", 00:26:43.368 "trtype": "tcp", 00:26:43.368 "traddr": "127.0.0.1", 00:26:43.368 "adrfam": "ipv4", 00:26:43.368 "trsvcid": "4420", 00:26:43.368 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:43.368 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:43.368 "prchk_reftag": false, 00:26:43.368 "prchk_guard": false, 00:26:43.368 "hdgst": false, 00:26:43.368 "ddgst": false, 00:26:43.368 "psk": ":spdk-test:key1", 00:26:43.368 "method": "bdev_nvme_attach_controller", 00:26:43.368 "req_id": 1 00:26:43.368 } 00:26:43.368 Got JSON-RPC error response 00:26:43.368 response: 00:26:43.368 { 00:26:43.368 "code": -5, 00:26:43.368 "message": "Input/output error" 00:26:43.368 } 00:26:43.368 13:55:40 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:26:43.368 13:55:40 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:43.368 13:55:40 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:43.368 13:55:40 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:43.368 13:55:40 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:26:43.368 13:55:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:26:43.368 13:55:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:26:43.368 13:55:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:26:43.368 13:55:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:26:43.368 13:55:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:26:43.368 13:55:40 keyring_linux -- keyring/linux.sh@33 -- # sn=1034387856 00:26:43.368 13:55:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1034387856 00:26:43.368 1 links removed 00:26:43.368 13:55:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:26:43.368 13:55:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:26:43.368 13:55:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:26:43.368 13:55:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:26:43.368 13:55:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:26:43.368 13:55:40 keyring_linux -- keyring/linux.sh@33 -- # sn=742005023 00:26:43.368 
13:55:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 742005023 00:26:43.368 1 links removed 00:26:43.368 13:55:40 keyring_linux -- keyring/linux.sh@41 -- # killprocess 694035 00:26:43.368 13:55:40 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 694035 ']' 00:26:43.368 13:55:40 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 694035 00:26:43.369 13:55:40 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:26:43.369 13:55:40 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:43.369 13:55:40 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 694035 00:26:43.369 13:55:40 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:43.369 13:55:40 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:43.369 13:55:40 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 694035' 00:26:43.369 killing process with pid 694035 00:26:43.369 13:55:40 keyring_linux -- common/autotest_common.sh@969 -- # kill 694035 00:26:43.369 Received shutdown signal, test time was about 1.000000 seconds 00:26:43.369 00:26:43.369 Latency(us) 00:26:43.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:43.369 =================================================================================================================== 00:26:43.369 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:43.369 13:55:40 keyring_linux -- common/autotest_common.sh@974 -- # wait 694035 00:26:43.628 13:55:40 keyring_linux -- keyring/linux.sh@42 -- # killprocess 693907 00:26:43.628 13:55:40 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 693907 ']' 00:26:43.628 13:55:40 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 693907 00:26:43.628 13:55:40 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:26:43.628 13:55:40 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:43.628 13:55:40 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 693907 00:26:43.628 13:55:40 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:43.628 13:55:40 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:43.628 13:55:40 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 693907' 00:26:43.628 killing process with pid 693907 00:26:43.628 13:55:40 keyring_linux -- common/autotest_common.sh@969 -- # kill 693907 00:26:43.628 13:55:40 keyring_linux -- common/autotest_common.sh@974 -- # wait 693907 00:26:44.193 00:26:44.193 real 0m4.961s 00:26:44.193 user 0m9.616s 00:26:44.193 sys 0m1.613s 00:26:44.193 13:55:41 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:44.193 13:55:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:44.193 ************************************ 00:26:44.193 END TEST keyring_linux 00:26:44.193 ************************************ 00:26:44.193 13:55:41 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:26:44.193 13:55:41 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:26:44.193 13:55:41 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:26:44.193 13:55:41 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:26:44.193 13:55:41 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:26:44.193 13:55:41 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:26:44.193 13:55:41 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:26:44.193 13:55:41 -- spdk/autotest.sh@347 -- # '[' 
0 -eq 1 ']' 00:26:44.193 13:55:41 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:26:44.193 13:55:41 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:26:44.193 13:55:41 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:26:44.193 13:55:41 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:26:44.193 13:55:41 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:26:44.193 13:55:41 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:26:44.193 13:55:41 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:26:44.193 13:55:41 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:26:44.193 13:55:41 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:26:44.193 13:55:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:44.193 13:55:41 -- common/autotest_common.sh@10 -- # set +x 00:26:44.193 13:55:41 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:26:44.194 13:55:41 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:26:44.194 13:55:41 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:26:44.194 13:55:41 -- common/autotest_common.sh@10 -- # set +x 00:26:46.098 INFO: APP EXITING 00:26:46.098 INFO: killing all VMs 00:26:46.098 INFO: killing vhost app 00:26:46.098 INFO: EXIT DONE 00:26:47.034 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:26:47.292 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:26:47.292 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:26:47.292 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:26:47.292 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:26:47.292 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:26:47.292 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:26:47.292 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:26:47.292 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:26:47.292 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:26:47.292 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:26:47.292 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:26:47.292 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:26:47.292 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:26:47.292 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:26:47.292 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:26:47.292 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:26:48.700 Cleaning 00:26:48.700 Removing: /var/run/dpdk/spdk0/config 00:26:48.700 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:48.700 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:48.700 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:48.700 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:48.700 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:26:48.700 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:26:48.700 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:26:48.700 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:26:48.700 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:48.700 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:48.700 Removing: /var/run/dpdk/spdk1/config 00:26:48.700 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:26:48.700 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:26:48.700 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:26:48.700 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:26:48.700 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:26:48.700 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:26:48.700 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:26:48.700 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:26:48.700 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:26:48.700 Removing: /var/run/dpdk/spdk1/hugepage_info
00:26:48.700 Removing: /var/run/dpdk/spdk1/mp_socket
00:26:48.700 Removing: /var/run/dpdk/spdk2/config
00:26:48.700 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:26:48.700 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:26:48.700 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:26:48.700 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:26:48.700 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:26:48.700 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:26:48.700 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:26:48.700 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:26:48.700 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:26:48.700 Removing: /var/run/dpdk/spdk2/hugepage_info
00:26:48.700 Removing: /var/run/dpdk/spdk3/config
00:26:48.700 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:26:48.701 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:26:48.701 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:26:48.701 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:26:48.701 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:26:48.701 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:26:48.701 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:26:48.701 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:26:48.701 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:26:48.701 Removing: /var/run/dpdk/spdk3/hugepage_info
00:26:48.701 Removing: /var/run/dpdk/spdk4/config
00:26:48.701 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:26:48.701 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:26:48.701 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:26:48.701 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:26:48.701 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:26:48.701 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:26:48.701 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:26:48.701 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:26:48.701 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:26:48.701 Removing: /var/run/dpdk/spdk4/hugepage_info
00:26:48.701 Removing: /dev/shm/bdev_svc_trace.1
00:26:48.701 Removing: /dev/shm/nvmf_trace.0
00:26:48.701 Removing: /dev/shm/spdk_tgt_trace.pid440754
00:26:48.701 Removing: /var/run/dpdk/spdk0
00:26:48.701 Removing: /var/run/dpdk/spdk1
00:26:48.701 Removing: /var/run/dpdk/spdk2
00:26:48.701 Removing: /var/run/dpdk/spdk3
00:26:48.701 Removing: /var/run/dpdk/spdk4
00:26:48.701 Removing: /var/run/dpdk/spdk_pid439192
00:26:48.701 Removing: /var/run/dpdk/spdk_pid439931
00:26:48.701 Removing: /var/run/dpdk/spdk_pid440754
00:26:48.701 Removing: /var/run/dpdk/spdk_pid441185
00:26:48.701 Removing: /var/run/dpdk/spdk_pid441883
00:26:48.701 Removing: /var/run/dpdk/spdk_pid442023
00:26:48.701 Removing: /var/run/dpdk/spdk_pid442741
00:26:48.701 Removing: /var/run/dpdk/spdk_pid442746
00:26:48.701 Removing: /var/run/dpdk/spdk_pid442988
00:26:48.701 Removing: /var/run/dpdk/spdk_pid444295
00:26:48.701 Removing: /var/run/dpdk/spdk_pid445347
00:26:48.701 Removing: /var/run/dpdk/spdk_pid445595
00:26:48.701 Removing: /var/run/dpdk/spdk_pid445832
00:26:48.701 Removing: /var/run/dpdk/spdk_pid446038
00:26:48.701 Removing: /var/run/dpdk/spdk_pid446589
00:26:48.701 Removing: /var/run/dpdk/spdk_pid446896
00:26:48.701 Removing: /var/run/dpdk/spdk_pid447060
00:26:48.701 Removing: /var/run/dpdk/spdk_pid447264
00:26:48.701 Removing: /var/run/dpdk/spdk_pid447548
00:26:48.701 Removing: /var/run/dpdk/spdk_pid449903
00:26:48.701 Removing: /var/run/dpdk/spdk_pid450066
00:26:48.701 Removing: /var/run/dpdk/spdk_pid450309
00:26:48.701 Removing: /var/run/dpdk/spdk_pid450357
00:26:48.701 Removing: /var/run/dpdk/spdk_pid450679
00:26:48.701 Removing: /var/run/dpdk/spdk_pid450791
00:26:48.701 Removing: /var/run/dpdk/spdk_pid451102
00:26:48.701 Removing: /var/run/dpdk/spdk_pid451225
00:26:48.701 Removing: /var/run/dpdk/spdk_pid451400
00:26:48.701 Removing: /var/run/dpdk/spdk_pid451525
00:26:48.701 Removing: /var/run/dpdk/spdk_pid451693
00:26:48.701 Removing: /var/run/dpdk/spdk_pid451703
00:26:48.701 Removing: /var/run/dpdk/spdk_pid452190
00:26:48.701 Removing: /var/run/dpdk/spdk_pid452350
00:26:48.701 Removing: /var/run/dpdk/spdk_pid452546
00:26:48.701 Removing: /var/run/dpdk/spdk_pid454623
00:26:48.701 Removing: /var/run/dpdk/spdk_pid457236
00:26:48.701 Removing: /var/run/dpdk/spdk_pid464096
00:26:48.701 Removing: /var/run/dpdk/spdk_pid464506
00:26:48.701 Removing: /var/run/dpdk/spdk_pid467016
00:26:48.701 Removing: /var/run/dpdk/spdk_pid467172
00:26:48.701 Removing: /var/run/dpdk/spdk_pid469804
00:26:48.701 Removing: /var/run/dpdk/spdk_pid473520
00:26:48.701 Removing: /var/run/dpdk/spdk_pid475578
00:26:48.701 Removing: /var/run/dpdk/spdk_pid482596
00:26:48.701 Removing: /var/run/dpdk/spdk_pid487807
00:26:48.701 Removing: /var/run/dpdk/spdk_pid489005
00:26:48.701 Removing: /var/run/dpdk/spdk_pid489677
00:26:48.701 Removing: /var/run/dpdk/spdk_pid499919
00:26:48.701 Removing: /var/run/dpdk/spdk_pid502204
00:26:48.701 Removing: /var/run/dpdk/spdk_pid528509
00:26:48.701 Removing: /var/run/dpdk/spdk_pid531796
00:26:48.701 Removing: /var/run/dpdk/spdk_pid535575
00:26:48.701 Removing: /var/run/dpdk/spdk_pid539460
00:26:48.701 Removing: /var/run/dpdk/spdk_pid539469
00:26:48.701 Removing: /var/run/dpdk/spdk_pid540120
00:26:48.701 Removing: /var/run/dpdk/spdk_pid540658
00:26:48.701 Removing: /var/run/dpdk/spdk_pid541315
00:26:48.701 Removing: /var/run/dpdk/spdk_pid541721
00:26:48.701 Removing: /var/run/dpdk/spdk_pid541727
00:26:48.960 Removing: /var/run/dpdk/spdk_pid541983
00:26:48.960 Removing: /var/run/dpdk/spdk_pid541994
00:26:48.960 Removing: /var/run/dpdk/spdk_pid542123
00:26:48.960 Removing: /var/run/dpdk/spdk_pid542659
00:26:48.960 Removing: /var/run/dpdk/spdk_pid543311
00:26:48.960 Removing: /var/run/dpdk/spdk_pid543969
00:26:48.960 Removing: /var/run/dpdk/spdk_pid544363
00:26:48.960 Removing: /var/run/dpdk/spdk_pid544373
00:26:48.960 Removing: /var/run/dpdk/spdk_pid544520
00:26:48.960 Removing: /var/run/dpdk/spdk_pid545403
00:26:48.960 Removing: /var/run/dpdk/spdk_pid546239
00:26:48.960 Removing: /var/run/dpdk/spdk_pid551681
00:26:48.960 Removing: /var/run/dpdk/spdk_pid576386
00:26:48.960 Removing: /var/run/dpdk/spdk_pid579428
00:26:48.960 Removing: /var/run/dpdk/spdk_pid580607
00:26:48.960 Removing: /var/run/dpdk/spdk_pid581929
00:26:48.960 Removing: /var/run/dpdk/spdk_pid581955
00:26:48.960 Removing: /var/run/dpdk/spdk_pid582086
00:26:48.960 Removing: /var/run/dpdk/spdk_pid582221
00:26:48.960 Removing: /var/run/dpdk/spdk_pid582721
00:26:48.960 Removing: /var/run/dpdk/spdk_pid583983
00:26:48.960 Removing: /var/run/dpdk/spdk_pid584711
00:26:48.960 Removing: /var/run/dpdk/spdk_pid585139
00:26:48.960 Removing: /var/run/dpdk/spdk_pid586736
00:26:48.960 Removing: /var/run/dpdk/spdk_pid587059
00:26:48.960 Removing: /var/run/dpdk/spdk_pid587617
00:26:48.960 Removing: /var/run/dpdk/spdk_pid590137
00:26:48.960 Removing: /var/run/dpdk/spdk_pid596048
00:26:48.960 Removing: /var/run/dpdk/spdk_pid598813
00:26:48.960 Removing: /var/run/dpdk/spdk_pid602603
00:26:48.960 Removing: /var/run/dpdk/spdk_pid603545
00:26:48.960 Removing: /var/run/dpdk/spdk_pid604645
00:26:48.960 Removing: /var/run/dpdk/spdk_pid607703
00:26:48.960 Removing: /var/run/dpdk/spdk_pid610199
00:26:48.960 Removing: /var/run/dpdk/spdk_pid614294
00:26:48.960 Removing: /var/run/dpdk/spdk_pid614382
00:26:48.960 Removing: /var/run/dpdk/spdk_pid617184
00:26:48.960 Removing: /var/run/dpdk/spdk_pid617316
00:26:48.960 Removing: /var/run/dpdk/spdk_pid617452
00:26:48.960 Removing: /var/run/dpdk/spdk_pid617722
00:26:48.960 Removing: /var/run/dpdk/spdk_pid617733
00:26:48.960 Removing: /var/run/dpdk/spdk_pid620494
00:26:48.960 Removing: /var/run/dpdk/spdk_pid620947
00:26:48.960 Removing: /var/run/dpdk/spdk_pid623489
00:26:48.960 Removing: /var/run/dpdk/spdk_pid625467
00:26:48.960 Removing: /var/run/dpdk/spdk_pid628881
00:26:48.960 Removing: /var/run/dpdk/spdk_pid632199
00:26:48.960 Removing: /var/run/dpdk/spdk_pid638437
00:26:48.960 Removing: /var/run/dpdk/spdk_pid642834
00:26:48.960 Removing: /var/run/dpdk/spdk_pid642903
00:26:48.960 Removing: /var/run/dpdk/spdk_pid655450
00:26:48.960 Removing: /var/run/dpdk/spdk_pid655860
00:26:48.960 Removing: /var/run/dpdk/spdk_pid656271
00:26:48.960 Removing: /var/run/dpdk/spdk_pid656680
00:26:48.960 Removing: /var/run/dpdk/spdk_pid657280
00:26:48.960 Removing: /var/run/dpdk/spdk_pid657747
00:26:48.960 Removing: /var/run/dpdk/spdk_pid658189
00:26:48.960 Removing: /var/run/dpdk/spdk_pid658601
00:26:48.960 Removing: /var/run/dpdk/spdk_pid661114
00:26:48.960 Removing: /var/run/dpdk/spdk_pid661258
00:26:48.961 Removing: /var/run/dpdk/spdk_pid665062
00:26:48.961 Removing: /var/run/dpdk/spdk_pid665220
00:26:48.961 Removing: /var/run/dpdk/spdk_pid666848
00:26:48.961 Removing: /var/run/dpdk/spdk_pid671868
00:26:48.961 Removing: /var/run/dpdk/spdk_pid671873
00:26:48.961 Removing: /var/run/dpdk/spdk_pid674770
00:26:48.961 Removing: /var/run/dpdk/spdk_pid676172
00:26:48.961 Removing: /var/run/dpdk/spdk_pid677683
00:26:48.961 Removing: /var/run/dpdk/spdk_pid679048
00:26:48.961 Removing: /var/run/dpdk/spdk_pid680420
00:26:48.961 Removing: /var/run/dpdk/spdk_pid681222
00:26:48.961 Removing: /var/run/dpdk/spdk_pid686612
00:26:48.961 Removing: /var/run/dpdk/spdk_pid687004
00:26:48.961 Removing: /var/run/dpdk/spdk_pid687395
00:26:48.961 Removing: /var/run/dpdk/spdk_pid688949
00:26:48.961 Removing: /var/run/dpdk/spdk_pid689343
00:26:48.961 Removing: /var/run/dpdk/spdk_pid689625
00:26:48.961 Removing: /var/run/dpdk/spdk_pid692073
00:26:48.961 Removing: /var/run/dpdk/spdk_pid692082
00:26:48.961 Removing: /var/run/dpdk/spdk_pid693543
00:26:48.961 Removing: /var/run/dpdk/spdk_pid693907
00:26:48.961 Removing: /var/run/dpdk/spdk_pid694035
00:26:48.961 Clean
00:26:49.219 13:55:46 -- common/autotest_common.sh@1451 -- # return 0
00:26:49.219 13:55:46 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup
00:26:49.219 13:55:46 -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:49.219 13:55:46 -- common/autotest_common.sh@10 -- # set +x
00:26:49.219 13:55:46 -- spdk/autotest.sh@390 -- # timing_exit autotest
00:26:49.219 13:55:46 -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:49.219 13:55:46 -- common/autotest_common.sh@10 -- # set +x
00:26:49.219 13:55:46 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:26:49.219 13:55:46 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:26:49.219 13:55:46 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:26:49.219 13:55:46 -- spdk/autotest.sh@395 -- # hash lcov
00:26:49.219 13:55:46 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:26:49.219 13:55:46 -- spdk/autotest.sh@397 -- # hostname
00:26:49.219 13:55:46 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:26:49.219 geninfo: WARNING: invalid characters removed from testname!
00:27:21.274 13:56:13 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:27:21.274 13:56:17 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:27:23.801 13:56:20 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:27:26.328 13:56:23 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:27:29.604 13:56:26 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:27:32.130 13:56:29 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:27:35.407 13:56:32 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:27:35.407 13:56:32 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:35.407 13:56:32 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:27:35.407 13:56:32 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:35.407 13:56:32 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:35.407 13:56:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:35.407 13:56:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:35.407 13:56:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:35.407 13:56:32 -- paths/export.sh@5 -- $ export PATH
00:27:35.407 13:56:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:35.407 13:56:32 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:27:35.407 13:56:32 -- common/autobuild_common.sh@447 -- $ date +%s
00:27:35.407 13:56:32 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721908592.XXXXXX
00:27:35.407 13:56:32 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721908592.3uIjbP
00:27:35.407 13:56:32 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:27:35.407 13:56:32 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:27:35.407 13:56:32 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:27:35.407 13:56:32 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:27:35.407 13:56:32 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:27:35.408 13:56:32 -- common/autobuild_common.sh@463 -- $ get_config_params
00:27:35.408 13:56:32 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:27:35.408 13:56:32 -- common/autotest_common.sh@10 -- $ set +x
00:27:35.408 13:56:32 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:27:35.408 13:56:32 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:27:35.408 13:56:32 -- pm/common@17 -- $ local monitor
00:27:35.408 13:56:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:27:35.408 13:56:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:27:35.408 13:56:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:27:35.408 13:56:32 -- pm/common@21 -- $ date +%s
00:27:35.408 13:56:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:27:35.408 13:56:32 -- pm/common@21 -- $ date +%s
00:27:35.408 13:56:32 -- pm/common@25 -- $ sleep 1
00:27:35.408 13:56:32 -- pm/common@21 -- $ date +%s
00:27:35.408 13:56:32 -- pm/common@21 -- $ date +%s
00:27:35.408 13:56:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721908592
00:27:35.408 13:56:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721908592
00:27:35.408 13:56:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721908592
00:27:35.408 13:56:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721908592
00:27:35.408 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721908592_collect-vmstat.pm.log
00:27:35.408 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721908592_collect-cpu-load.pm.log
00:27:35.408 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721908592_collect-cpu-temp.pm.log
00:27:35.408 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721908592_collect-bmc-pm.bmc.pm.log
00:27:36.346 13:56:33 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:27:36.347 13:56:33 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:27:36.347 13:56:33 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:27:36.347 13:56:33 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:27:36.347 13:56:33 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:27:36.347 13:56:33 -- spdk/autopackage.sh@19 -- $ timing_finish
00:27:36.347 13:56:33 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:27:36.347 13:56:33 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:27:36.347 13:56:33 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:27:36.347 13:56:33 -- spdk/autopackage.sh@20 -- $ exit 0
00:27:36.347 13:56:33 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:27:36.347 13:56:33 -- pm/common@29 -- $ signal_monitor_resources TERM
00:27:36.347 13:56:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:27:36.347 13:56:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:27:36.347 13:56:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:27:36.347 13:56:33 -- pm/common@44 -- $ pid=703472
00:27:36.347 13:56:33 -- pm/common@50 -- $ kill -TERM 703472
00:27:36.347 13:56:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:27:36.347 13:56:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:27:36.347 13:56:33 -- pm/common@44 -- $ pid=703474
00:27:36.347 13:56:33 -- pm/common@50 -- $ kill -TERM 703474
00:27:36.347 13:56:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:27:36.347 13:56:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:27:36.347 13:56:33 -- pm/common@44 -- $ pid=703476
00:27:36.347 13:56:33 -- pm/common@50 -- $ kill -TERM 703476
00:27:36.347 13:56:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:27:36.347 13:56:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:27:36.347 13:56:33 -- pm/common@44 -- $ pid=703504
00:27:36.347 13:56:33 -- pm/common@50 -- $ sudo -E kill -TERM 703504
00:27:36.347 + [[ -n 355504 ]]
00:27:36.347 + sudo kill 355504
00:27:36.356 [Pipeline] }
00:27:36.375 [Pipeline] // stage
00:27:36.381 [Pipeline] }
00:27:36.400 [Pipeline] // timeout
00:27:36.407 [Pipeline] }
00:27:36.459 [Pipeline] // catchError
00:27:36.466 [Pipeline] }
00:27:36.479 [Pipeline] // wrap
00:27:36.484 [Pipeline] }
00:27:36.493 [Pipeline] // catchError
00:27:36.499 [Pipeline] stage
00:27:36.500 [Pipeline] { (Epilogue)
00:27:36.508 [Pipeline] catchError
00:27:36.509 [Pipeline] {
00:27:36.519 [Pipeline] echo
00:27:36.520 Cleanup processes
00:27:36.525 [Pipeline] sh
00:27:36.804 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:27:36.804 703607 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:27:36.804 703740 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:27:36.817 [Pipeline] sh
00:27:37.101 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:27:37.101 ++ awk '{print $1}'
00:27:37.101 ++ grep -v 'sudo pgrep'
00:27:37.101 + sudo kill -9 703607
00:27:37.113 [Pipeline] sh
00:27:37.399 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:27:45.541 [Pipeline] sh
00:27:45.828 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:27:45.828 Artifacts sizes are good
00:27:45.843 [Pipeline] archiveArtifacts
00:27:45.850 Archiving artifacts
00:27:46.080 [Pipeline] sh
00:27:46.363 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:27:46.380 [Pipeline] cleanWs
00:27:46.391 [WS-CLEANUP] Deleting project workspace...
00:27:46.391 [WS-CLEANUP] Deferred wipeout is used...
00:27:46.398 [WS-CLEANUP] done
00:27:46.400 [Pipeline] }
00:27:46.417 [Pipeline] // catchError
00:27:46.429 [Pipeline] sh
00:27:46.708 + logger -p user.info -t JENKINS-CI
00:27:46.715 [Pipeline] }
00:27:46.727 [Pipeline] // stage
00:27:46.731 [Pipeline] }
00:27:46.746 [Pipeline] // node
00:27:46.752 [Pipeline] End of Pipeline
00:27:46.782 Finished: SUCCESS